Dataset columns (as reported by the dataset viewer):

column       type           range
question     stringlengths  15 – 150
question_id  int64          258k – 2.12M
score        int64          1 – 208
link         stringlengths  60 – 131
body         stringlengths  101 – 9.72k
answers      listlengths    1 – 288
verbolised   stringlengths  650 – 683k
question: Exists first-order formula $Z(x)$ in language $\langle 0, +, \le\rangle$ that over field $\mathbb{Q}$ expresses fact that $x$ is integer?
question_id: 2089199
score: 6
link: https://math.stackexchange.com/questions/2089199/exists-first-order-formula-zx-in-language-langle-0-le-rangle-that-ov
<p>This is a followup to my previous questions <a href="https://math.stackexchange.com/questions/2085455/first-order-formula-in-first-order-language-another-open-language-where-equival?noredirect=1&amp;lq=1">here</a> and <a href="https://math.stackexchange.com/questions/2087230/followup-question-on-first-order-formulas-in-first-order-languages-and-open-form?noredirect=1&amp;lq=1">here</a>.</p> <p>Does there exist any first-order formula $Z(x)$ in the language $\langle 0, +, \le\rangle$ that over the field $\mathbb{Q}$ of rational numbers expresses the fact that $x$ is an integer?</p> <p>I strongly suspect there isn't any, but I don't know how to see that. Could anybody help?</p>
[ { "answer_id": 2089237, "score": 5, "text": "<p>You are correct: no such formula exists.</p>\n\n<p>Suppose that $Z(x)$ is such a formula. That is, for each $q \\in \\mathbb{Q}$\n$$\\mathfrak{Q} = \\langle \\mathbb{Q}, 0 , + , \\leq \\rangle \\models Z[q] \\Leftrightarrow q \\in \\mathbb{Z}. \\tag{1}$$</p>\n\n<p>Note that the mapping $\\sigma (x) = \\frac{x}{2}$ is an automorphism of $\\mathfrak{Q}$, and so for each $q \\in \\mathbb{Q}$ we have that $$\\mathfrak{Q} \\models Z[q] \\Leftrightarrow \\mathfrak{Q} \\models Z[ \\sigma(q) ]. \\tag{2}$$\nNow consider what happens when $q = 1$: by (2) we have that\n$$\\mathfrak{Q} \\models Z[1] \\Leftrightarrow \\mathfrak{Q} \\models Z[\\tfrac{1}{2}],$$ since $\\sigma(1) = \\tfrac{1}{2}$, which clearly contradicts our assumption (1).</p>\n" }, { "answer_id": 2098555, "score": 2, "text": "<p>While @komorebi's answer is correct, approaching this question using automorphisms is a bit fragile. In particular, the automorphism method cannot rule out the possibility that the integers are definable <em>with parameters</em> (that is, after adding some constants). To see why, observe that there are no non-trivial automorphisms fixing a non-zero number.</p>\n\n<p>Fortunately, there is a classification of all definable sets in this structure: a subset of $\\mathbb{Q}^n$ is definable (possibly with parameters) in the language $(\\mathbb{Q},+,&lt;)$ if and only if it is a Boolean combination of solution sets of linear inequalities. In particular, a subset of $\\mathbb{Q}$ is definable (with parameters) if and only if it is a Boolean combination of intervals and points, so $\\mathbb{Z}$ is not definable.</p>\n\n<p>The above result is a consequence of <strong>quantifier elimination</strong> for the theory of <strong>divisible ordered Abelian groups</strong>. (Really, it's just a restatement of that fact.) That fact, in turn, is an approachable exercise: prove by induction on formulas that every formula in the language $(+,&lt;)$ is equivalent (modulo the theory of divisible ordered Abelian groups) to a quantifier-free formula.</p>\n" } ]
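The automorphism argument in the first answer can be checked mechanically with exact rational arithmetic. This sketch (names `sigma`, `sigma_inv`, and the sample grid are mine, not from the answer) verifies on a sample of rationals that $\sigma(x) = x/2$ fixes $0$, preserves $+$ and $\le$, and has inverse $x \mapsto 2x$, yet moves the integer $1$ to the non-integer $1/2$:

```python
from fractions import Fraction
from itertools import product

# The automorphism from the first answer, and its inverse (illustrative names).
sigma = lambda x: x / 2
sigma_inv = lambda x: 2 * x

# A small grid of sample rationals, including the witness 1 from the proof.
samples = [Fraction(n, d) for n, d in product(range(-4, 5), range(1, 5))]

# sigma fixes 0, preserves + and <=, and sigma_inv undoes it.
assert sigma(Fraction(0)) == 0
for a, b in product(samples, repeat=2):
    assert sigma(a + b) == sigma(a) + sigma(b)
    assert (a <= b) == (sigma(a) <= sigma(b))
    assert sigma_inv(sigma(a)) == a

# But sigma does not preserve integrality: it moves 1 to 1/2,
# so no formula Z(x) invariant under sigma can define the integers.
assert sigma(Fraction(1)) == Fraction(1, 2)
```

The finite sample only illustrates the identities; that they hold for all of $\mathbb{Q}$ is immediate from the field axioms.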
Question: Exists first-order formula $Z(x)$ in language $\langle 0, +, \le\rangle$ that over field $\mathbb{Q}$ expresses fact that $x$ is integer? Answer 1: <p>You are correct: no such formula exists.</p> <p>Suppose that $Z(x)$ is such a formula. That is, for each $q \in \mathbb{Q}$ $$\mathfrak{Q} = \langle \mathbb{Q}, 0 , + , \leq \rangle \models Z[q] \Leftrightarrow q \in \mathbb{Z}. \tag{1}$$</p> <p>Note that the mapping $\sigma (x) = \frac{x}{2}$ is an isomorphism of $\mathfrak{Q}$, and so for each $q \in \mathbb{Q}$ we have that $$\mathfrak{Q} \models Z[q] \Leftrightarrow \mathfrak{Q} \models Z[ \sigma(q) ]. \tag{2}$$ Now consider what happens when $q = 1$: By (2) we have that $$\mathfrak{Q} \models Z[1] \Leftrightarrow \mathfrak{Q} \models Z[\sigma(1) = \tfrac{1}{2}],$$ which clearly contradicts our assumption (1).</p> Answer 2: <p>While @komorebi's answer is correct, approaching this question using automorphisms is a bit fragile. In particular, the automorphism method cannot rule out the possibility that the integers are definable <em>with parameters</em> (that is, after adding some constants). To see why, observe that there are no non-trivial automorphisms fixing a non-zero number.</p> <p>Fortunately, there is classification of all definable sets in this structure: a subset of $\mathbb{Q}^n$ is definable (possibly with parameters) in the language $(\mathbb{Q},+,&lt;)$ if and only if it is a Boolean combination of solution sets to linear inequalities. In particular, a subset of $\mathbb{Q}$ is definable (with parameters) if and only if it is a Boolean combination of intervals and points, so $\mathbb{Z}$ is not definable.</p> <p>The above result is a consequence of <strong>quantifier elimination</strong> for the theory of <strong>divisible ordered Abelian groups</strong>. (Really, it's just a restatement of that fact.) 
That fact, in turn, is an approachable exercise: prove by induction on formulas that every formula in the language $(+,&lt;)$ is equivalent (modulo the theory of divisible ordered Abelian groups) to a quantifier free formula.</p> Answer 3: <p>You are correct: no such formula exists.</p> <p>Suppose that $Z(x)$ is such a formula. That is, for each $q \in \mathbb{Q}$ $$\mathfrak{Q} = \langle \mathbb{Q}, 0 , + , \leq \rangle \models Z[q] \Leftrightarrow q \in \mathbb{Z}. \tag{1}$$</p> <p>Note that the mapping $\sigma (x) = \frac{x}{2}$ is an isomorphism of $\mathfrak{Q}$, and so for each $q \in \mathbb{Q}$ we have that $$\mathfrak{Q} \models Z[q] \Leftrightarrow \mathfrak{Q} \models Z[ \sigma(q) ]. \tag{2}$$ Now consider what happens when $q = 1$: By (2) we have that $$\mathfrak{Q} \models Z[1] \Leftrightarrow \mathfrak{Q} \models Z[\sigma(1) = \tfrac{1}{2}],$$ which clearly contradicts our assumption (1).</p> Answer 4: <p>While @komorebi's answer is correct, approaching this question using automorphisms is a bit fragile. In particular, the automorphism method cannot rule out the possibility that the integers are definable <em>with parameters</em> (that is, after adding some constants). To see why, observe that there are no non-trivial automorphisms fixing a non-zero number.</p> <p>Fortunately, there is classification of all definable sets in this structure: a subset of $\mathbb{Q}^n$ is definable (possibly with parameters) in the language $(\mathbb{Q},+,&lt;)$ if and only if it is a Boolean combination of solution sets to linear inequalities. In particular, a subset of $\mathbb{Q}$ is definable (with parameters) if and only if it is a Boolean combination of intervals and points, so $\mathbb{Z}$ is not definable.</p> <p>The above result is a consequence of <strong>quantifier elimination</strong> for the theory of <strong>divisible ordered Abelian groups</strong>. (Really, it's just a restatement of that fact.) 
That fact, in turn, is an approachable exercise: prove by induction on formulas that every formula in the language $(+,&lt;)$ is equivalent (modulo the theory of divisible ordered Abelian groups) to a quantifier free formula.</p> Answer 5: <p>You are correct: no such formula exists.</p> <p>Suppose that $Z(x)$ is such a formula. That is, for each $q \in \mathbb{Q}$ $$\mathfrak{Q} = \langle \mathbb{Q}, 0 , + , \leq \rangle \models Z[q] \Leftrightarrow q \in \mathbb{Z}. \tag{1}$$</p> <p>Note that the mapping $\sigma (x) = \frac{x}{2}$ is an isomorphism of $\mathfrak{Q}$, and so for each $q \in \mathbb{Q}$ we have that $$\mathfrak{Q} \models Z[q] \Leftrightarrow \mathfrak{Q} \models Z[ \sigma(q) ]. \tag{2}$$ Now consider what happens when $q = 1$: By (2) we have that $$\mathfrak{Q} \models Z[1] \Leftrightarrow \mathfrak{Q} \models Z[\sigma(1) = \tfrac{1}{2}],$$ which clearly contradicts our assumption (1).</p> Answer 6: <p>While @komorebi's answer is correct, approaching this question using automorphisms is a bit fragile. In particular, the automorphism method cannot rule out the possibility that the integers are definable <em>with parameters</em> (that is, after adding some constants). To see why, observe that there are no non-trivial automorphisms fixing a non-zero number.</p> <p>Fortunately, there is classification of all definable sets in this structure: a subset of $\mathbb{Q}^n$ is definable (possibly with parameters) in the language $(\mathbb{Q},+,&lt;)$ if and only if it is a Boolean combination of solution sets to linear inequalities. In particular, a subset of $\mathbb{Q}$ is definable (with parameters) if and only if it is a Boolean combination of intervals and points, so $\mathbb{Z}$ is not definable.</p> <p>The above result is a consequence of <strong>quantifier elimination</strong> for the theory of <strong>divisible ordered Abelian groups</strong>. (Really, it's just a restatement of that fact.) 
That fact, in turn, is an approachable exercise: prove by induction on formulas that every formula in the language $(+,&lt;)$ is equivalent (modulo the theory of divisible ordered Abelian groups) to a quantifier free formula.</p> Answer 7: <p>You are correct: no such formula exists.</p> <p>Suppose that $Z(x)$ is such a formula. That is, for each $q \in \mathbb{Q}$ $$\mathfrak{Q} = \langle \mathbb{Q}, 0 , + , \leq \rangle \models Z[q] \Leftrightarrow q \in \mathbb{Z}. \tag{1}$$</p> <p>Note that the mapping $\sigma (x) = \frac{x}{2}$ is an isomorphism of $\mathfrak{Q}$, and so for each $q \in \mathbb{Q}$ we have that $$\mathfrak{Q} \models Z[q] \Leftrightarrow \mathfrak{Q} \models Z[ \sigma(q) ]. \tag{2}$$ Now consider what happens when $q = 1$: By (2) we have that $$\mathfrak{Q} \models Z[1] \Leftrightarrow \mathfrak{Q} \models Z[\sigma(1) = \tfrac{1}{2}],$$ which clearly contradicts our assumption (1).</p> Answer 8: <p>While @komorebi's answer is correct, approaching this question using automorphisms is a bit fragile. In particular, the automorphism method cannot rule out the possibility that the integers are definable <em>with parameters</em> (that is, after adding some constants). To see why, observe that there are no non-trivial automorphisms fixing a non-zero number.</p> <p>Fortunately, there is classification of all definable sets in this structure: a subset of $\mathbb{Q}^n$ is definable (possibly with parameters) in the language $(\mathbb{Q},+,&lt;)$ if and only if it is a Boolean combination of solution sets to linear inequalities. In particular, a subset of $\mathbb{Q}$ is definable (with parameters) if and only if it is a Boolean combination of intervals and points, so $\mathbb{Z}$ is not definable.</p> <p>The above result is a consequence of <strong>quantifier elimination</strong> for the theory of <strong>divisible ordered Abelian groups</strong>. (Really, it's just a restatement of that fact.) 
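The automorphism argument in the first answer can be made concrete: the map $\sigma(x)=x/2$ preserves $0$, $+$, and $\le$ on $\mathbb{Q}$, yet sends $1$ to $\tfrac12$, so no $\langle 0,+,\le\rangle$-formula can define $\mathbb{Z}$. A minimal sketch (Python's `fractions.Fraction` stands in for $\mathbb{Q}$; the sampled rationals are an arbitrary illustrative choice):

```python
from fractions import Fraction
from itertools import product

def sigma(x):
    # The candidate automorphism of (Q, 0, +, <=): halving.
    return x / 2

# A small sample of rationals to test against.
sample = [Fraction(n, d) for n, d in product(range(-3, 4), range(1, 4))]

# sigma fixes 0 and respects + and <= on every sampled pair --
# the finite-sample evidence that it is a structure homomorphism.
assert sigma(Fraction(0)) == 0
for a, b in product(sample, repeat=2):
    assert sigma(a + b) == sigma(a) + sigma(b)
    assert (a <= b) == (sigma(a) <= sigma(b))

# But sigma does not preserve "being an integer":
one = Fraction(1)
assert one.denominator == 1          # 1 is an integer
assert sigma(one).denominator != 1   # 1/2 is not
```

Since every first-order formula in the language is invariant under every automorphism of the structure, $Z(1)$ and $Z(\sigma(1))=Z(\tfrac12)$ would have to agree, which is the contradiction.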
Show that this lengthy integral is equal to $\int_{0}^{\infty}{1\over x^8+x^4+1}\cdot{1\over x^3+1}\mathrm dx={5\pi\over 12\sqrt{3}}$
2,089,367
6
https://math.stackexchange.com/questions/2089367/show-that-this-lengthy-integral-is-equal-to-int-0-infty1-over-x8x41
<p>$$\displaystyle\int_{0}^{\infty}{1\over x^8+x^4+1}\cdot{1\over x^3+1}\mathrm dx={5\pi\over 12\sqrt{3}}$$</p> <p>$x^8+x^4+1=(x^2+x+1)(x^2-x+1)(x^2+\sqrt{3}x+1)(x^2-\sqrt{3}x+1)$</p> <p>$x^3+1=(x+1)(x^2-x+1)$</p> <p>${A\over x+1}+{Bx+C\over x^2-x+1}={1\over x^3+1}$</p> <p>${Ax+B\over x^2+x+1}+{Cx+D\over x^2-x+1}+{Ex+F\over x^2+\sqrt{3}x+1}+{Gx+H\over x^2-\sqrt{3}x+1}={1\over x^8+x^4+1}$</p> <p>It would be a nightmare to carry out this partial fraction decomposition by hand.</p> <p>I would like some help, please.</p>
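The real factorization of $x^8+x^4+1$ is $(x^2+x+1)(x^2-x+1)(x^2+\sqrt{3}x+1)(x^2-\sqrt{3}x+1)$: pairing the first two factors gives $x^4+x^2+1$, the last two give $x^4-x^2+1$, and their product is $x^8+x^4+1$. This can be sanity-checked numerically with a plain coefficient convolution — a quick sketch, not part of the original question:

```python
import math

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

r3 = math.sqrt(3)
factors = [
    [1, 1, 1],    # x^2 + x + 1          (constant, x, x^2)
    [1, -1, 1],   # x^2 - x + 1
    [1, r3, 1],   # x^2 + sqrt(3) x + 1
    [1, -r3, 1],  # x^2 - sqrt(3) x + 1
]

prod = [1.0]
for f in factors:
    prod = poly_mul(prod, f)

target = [1, 0, 0, 0, 1, 0, 0, 0, 1]  # 1 + x^4 + x^8
assert all(abs(a - b) < 1e-12 for a, b in zip(prod, target))
```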
[ { "answer_id": 2089414, "score": 10, "text": "<p>We want to compute:\n$$ I = \\int_{0}^{1}\\frac{x^4-1}{x^{12}-1}\\cdot\\frac{\\mathrm dx}{x^3+1}+\\int_{0}^{1}\\frac{x^{8}-x^{12}}{1-x^{12}}\\cdot\\frac{x\\,\\mathrm dx}{x^3+1}=\\int_{0}^{1}\\frac{1-x^3+x^6}{1+x^4+x^8}\\,\\mathrm dx $$\nthat is:\n$$ I = \\int_{0}^{1}\\left(1-x^3-x^4+x^6+x^7-x^{10}\\right)\\frac{\\mathrm dx}{1-x^{12}} $$\nor:\n$$ I=\\sum_{k\\geq 0}\\left(\\frac{1}{12k+1}-\\frac{1}{12k+4}-\\frac{1}{12k+5}+\\frac{1}{12k+7}+\\frac{1}{12k+8}-\\frac{1}{12k+11}\\right). $$\nBy the reflection formula for the <a href=\"https://en.wikipedia.org/wiki/Digamma_function\" rel=\"nofollow noreferrer\">digamma function</a> we have:\n$$ \\sum_{k\\geq 0}\\left(\\frac{1}{12k+\\tau}-\\frac{1}{12k+(12-\\tau)}\\right)=\\frac{\\pi}{12}\\,\\cot\\frac{\\pi \\tau}{12} $$\nand the wanted result trivially follows from $\\cot\\left(\\frac{\\pi}{12}\\right)=2+\\sqrt{3}$.</p>\n" }, { "answer_id": 2089535, "score": 6, "text": "<p>If you really love partial fractions you may note that $$I=\\int_{0}^{1}\\frac{1-x^{3}+x^{6}}{1+x^{4}+x^{8}}\\mathrm dx=\\frac{1}{4\\sqrt{3}}\\int_{0}^{1}\\frac{1}{x^{2}+\\sqrt{3}x+1}\\mathrm dx+\\frac{1}{4}\\int_{0}^{1}\\frac{1}{x^{2}+x+1}\\mathrm dx\n $$ $$+\\frac{3}{4}\\int_{0}^{1}\\frac{1}{x^{2}-x+1}\\mathrm dx-\\frac{1}{4\\sqrt{3}}\\int_{0}^{1}\\frac{1}{x^{2}-\\sqrt{3}x+1}\\mathrm dx\n $$ $$=\\frac{1}{2\\sqrt{3}}\\int_{\\sqrt{3}}^{2+\\sqrt{3}}\\frac{1}{x^{2}+1}\\mathrm dx+\\frac{1}{2\\sqrt{3}}\\int_{1/\\sqrt{3}}^{\\sqrt{3}}\\frac{1}{x^{2}+1}\\mathrm dx\n $$ $$+\\frac{\\sqrt{3}}{2}\\int_{-1/\\sqrt{3}}^{1/\\sqrt{3}}\\frac{1}{x^{2}+1}\\mathrm dx-\\frac{1}{2\\sqrt{3}}\\int_{-\\sqrt{3}}^{2-\\sqrt{3}}\\frac{1}{x^{2}+1}\\mathrm dx\n $$ so we have $4$ elementary integrals hence $$I=\\color{red}{\\frac{5\\pi}{12\\sqrt{3}}}.$$</p>\n" } ]
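The series in the first answer converges like $O(1/k^2)$ (the six coefficients of the $1/k$ parts cancel), so partial sums give a quick numerical check of the closed form $5\pi/(12\sqrt{3})$ — a sanity-check sketch, not a proof:

```python
import math

def partial_sum(n_terms):
    # Partial sum of  sum_k  1/(12k+1) - 1/(12k+4) - 1/(12k+5)
    #                      + 1/(12k+7) + 1/(12k+8) - 1/(12k+11)
    s = 0.0
    for k in range(n_terms):
        b = 12 * k
        s += (1/(b + 1) - 1/(b + 4) - 1/(b + 5)
              + 1/(b + 7) + 1/(b + 8) - 1/(b + 11))
    return s

closed_form = 5 * math.pi / (12 * math.sqrt(3))

# The special value used at the end of the derivation: cot(pi/12) = 2 + sqrt(3).
assert abs(1 / math.tan(math.pi / 12) - (2 + math.sqrt(3))) < 1e-12

# The tail beyond N terms is roughly 1/(36 N), so 200000 terms
# pins the sum down to well under 1e-6.
assert abs(partial_sum(200_000) - closed_form) < 1e-6
```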
Question: Show that this lengthy integral is equal to $\int_{0}^{\infty}{1\over x^8+x^4+1}\cdot{1\over x^3+1}\mathrm dx={5\pi\over 12\sqrt{3}}$ Answer 1: <p>We want to compute: $$ I = \int_{0}^{1}\frac{x^4-1}{x^{12}-1}\cdot\frac{\mathrm dx}{x^3+1}+\int_{0}^{1}\frac{x^{8}-x^{12}}{1-x^{12}}\cdot\frac{x\,\mathrm dx}{x^3+1}=\int_{0}^{1}\frac{1-x^3+x^6}{1+x^4+x^8}\,\mathrm dx $$ that is: $$ I = \int_{0}^{1}\left(1-x^3-x^4+x^6+x^7-x^{10}\right)\frac{\mathrm dx}{1-x^{12}} $$ or: $$ I=\sum_{k\geq 0}\left(\frac{1}{12k+1}-\frac{1}{12k+4}-\frac{1}{12k+5}+\frac{1}{12k+7}+\frac{1}{12k+8}-\frac{1}{12k+11}\right). $$ By the reflection formula for the <a href="https://en.wikipedia.org/wiki/Digamma_function" rel="nofollow noreferrer">digamma function</a> we have: $$ \sum_{k\geq 0}\left(\frac{1}{12k+\tau}-\frac{1}{12k+(12-\tau)}\right)=\frac{\pi}{12}\,\cot\frac{\pi \tau}{12} $$ and the wanted result trivially follows from $\cot\left(\frac{\pi}{12}\right)=2+\sqrt{3}$.</p> Answer 2: <p>If you really love partial fractions you may note that $$I=\int_{0}^{1}\frac{1-x^{3}+x^{6}}{1+x^{4}+x^{8}}\mathrm dx=\frac{1}{4\sqrt{3}}\int_{0}^{1}\frac{1}{x^{2}+\sqrt{3}x+1}\mathrm dx+\frac{1}{4}\int_{0}^{1}\frac{1}{x^{2}+x+1}\mathrm dx $$ $$+\frac{3}{4}\int_{0}^{1}\frac{1}{x^{2}-x+1}\mathrm dx-\frac{1}{4\sqrt{3}}\int_{0}^{1}\frac{1}{x^{2}-\sqrt{3}x+1}\mathrm dx $$ $$=\frac{1}{2\sqrt{3}}\int_{\sqrt{3}}^{2+\sqrt{3}}\frac{1}{x^{2}+1}\mathrm dx+\frac{1}{2\sqrt{3}}\int_{1/\sqrt{3}}^{\sqrt{3}}\frac{1}{x^{2}+1}\mathrm dx $$ $$+\frac{\sqrt{3}}{2}\int_{-1/\sqrt{3}}^{1/\sqrt{3}}\frac{1}{x^{2}+1}\mathrm dx-\frac{1}{2\sqrt{3}}\int_{-\sqrt{3}}^{2-\sqrt{3}}\frac{1}{x^{2}+1}\mathrm dx $$ so we have $4$ elementary integrals hence $$I=\color{red}{\frac{5\pi}{12\sqrt{3}}}.$$</p>
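Both derivations above give $I=\frac{5\pi}{12\sqrt{3}}\approx 0.7557$. As a quick sketch (function names here are illustrative, not from the answers), the reduction in Answer 1 to $\int_{0}^{1}\frac{1-x^3+x^6}{1+x^4+x^8}\,\mathrm dx$ can be checked numerically with a hand-rolled composite Simpson's rule:

```python
import math

def integrand(x):
    # reduced integrand from Answer 1:
    # ∫₀^∞ dx / ((x⁸+x⁴+1)(x³+1)) = ∫₀¹ (1 - x³ + x⁶) / (1 + x⁴ + x⁸) dx
    return (1 - x**3 + x**6) / (1 + x**4 + x**8)

def simpson(f, a, b, n=1000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

approx = simpson(integrand, 0.0, 1.0)
exact = 5 * math.pi / (12 * math.sqrt(3))
print(approx, exact)  # both ≈ 0.75575
```

The integrand is smooth on $[0,1]$, so Simpson's rule converges rapidly and the two numbers agree to many digits.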
Why can't we separate fraction powers in Euler's formula?
2,089,620
6
https://math.stackexchange.com/questions/2089620/why-cant-we-separate-fraction-powers-in-eulers-formula
<p>I am sure there is something wrong in what I am doing, but I just want someone to point it out to me.</p> <p>Why can't we say that $e^{\frac {\pi i}{3}} = \left(e^{\pi i}\right)^{\frac {1}{3}} = (-1)^{\frac {1}{3}} = -1$? Why do we have to calculate it by the formula as it is (which produces a different value)?</p> <p>Another question: why can't we say that $(-1)^{\frac {1}{2}} = \left(\left(-1\right)^{2}\right)^{\frac {1}{4}} = 1^{\frac {1}{4}} = 1$?</p> <p>Again, I have to say that I know this is all wrong; I just want to know why it is wrong.</p>
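As an illustration (variable names are arbitrary), Python's `cmath` principal-value powers reproduce the discrepancy: the principal value of $(-1)^{1/3}$ is $e^{i\pi/3}$, not $-1$, and the second manipulation visibly changes the value because $(a^b)^c \ne a^{bc}$ in general:

```python
import cmath

# e^{iπ/3} computed directly from Euler's formula
z1 = cmath.exp(1j * cmath.pi / 3)

# principal value of (-1)^(1/3): exp((1/3)·Log(-1)) = e^{iπ/3}, NOT the real root -1
z2 = (-1 + 0j) ** (1 / 3)

# the second manipulation: (-1)^(1/2) versus ((-1)^2)^(1/4)
w1 = (-1 + 0j) ** 0.5            # principal square root of -1 → i
w2 = ((-1 + 0j) ** 2) ** 0.25    # principal fourth root of 1 → 1

print(z1, z2)  # both ≈ 0.5 + 0.866j
print(w1, w2)  # ≈ 1j versus (1+0j)
```

Each step picks the principal branch, so $(-1)^{1/2}$ and $\left((-1)^2\right)^{1/4}$ land on different fourth roots of $1$ — the multivaluedness the answers below describe.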
[ { "answer_id": 2089634, "score": 3, "text": "<p>It is simply the case that <a href=\"https://math.stackexchange.com/questions/1347504/for-which-complex-a-b-c-does-abc-abc-hold\">$(a^b)^c$ is not always equal to $a^{bc}$</a> if $a$ is not positive or if $b$ and/or $c$ are complex.</p>\n\n<p>The problems here are that</p>\n\n<p>$$x=(-1)^{1/2}\\implies x^2=-1\\implies x^4=1$$</p>\n\n<p>However, when you say that $1^{1/4}=1$, you cause a misconception. Usually, this is perfectly fine, but when you think about it, why not $1^{1/4}=-1$? Indeed, both values are solutions to the equation $x^4=1$, but neither are equal to $(-1)^{1/2}$. However, upon factoring, you could see that</p>\n\n<p>$$x^4=1\\implies x^4-1=0\\implies(x+1)(x-1)(x+i)(x-i)=0$$</p>\n\n<p>One of these is the correct solution in our context, though you happen to choose the wrong one.</p>\n\n<p>Similarly, you state that $(-1)^{1/3}=-1$, but</p>\n\n<p>$$x^3=-1\\implies x^3+1=0\\implies(x+1)(x^2-x+1)=0$$</p>\n\n<p>And $x=e^{\\pi i/3}$ is one such possible solution. If you want more information, please see the link above.</p>\n" }, { "answer_id": 2089633, "score": 1, "text": "<p>The laws we learn for powers apply to positive real numbers, they don't apply when you start to take roots of negative numbers.</p>\n\n<p>Your proof that $ (-1)^{\\frac {1}{2}}=1$ obviously raises alarm. As Ross mentioned in the comments this is related to functions being multivalued.</p>\n\n<p>A similar false-proof using multivalued functions is $\\sqrt{1}=1$ but $\\sqrt{1}=-1$. Therefore $1=-1$.</p>\n\n<p>Because we allowed the square root to have two results it lead to the result $1=-1$. Normally the square root is defined to be always positive so as to avoid this problem. However, if we allow for multiple values then we have to remember that $\\sqrt{a}=b$ and $\\sqrt{a}=c$ doesn't imply $b=\\sqrt{a}=c$. 
Allowing multivalued functions to exist means that we lose a property of equality.</p>\n" }, { "answer_id": 2089634, "score": 3, "text": "<p>It is simply the case that <a href=\"https://math.stackexchange.com/questions/1347504/for-which-complex-a-b-c-does-abc-abc-hold\">$(a^b)^c$ is not always equal to $a^{bc}$</a> if $a$ is not positive or if $b$ and/or $c$ are complex.</p>\n\n<p>The problems here are that</p>\n\n<p>$$x=(-1)^{1/2}\\implies x^2=-1\\implies x^4=1$$</p>\n\n<p>However, when you say that $1^{1/4}=1$, you cause a misconception. Usually, this is perfectly fine, but when you think about it, why not $1^{1/4}=-1$? Indeed, both values are solutions to the equation $x^4=1$, but neither are equal to $(-1)^{1/2}$. However, upon factoring, you could see that</p>\n\n<p>$$x^4=1\\implies x^4-1=0\\implies(x+1)(x-1)(x+i)(x-i)=0$$</p>\n\n<p>One of these is the correct solution in our context, though you happen to choose the wrong one.</p>\n\n<p>Similarly, you state that $(-1)^{1/3}=-1$, but</p>\n\n<p>$$x^3=-1\\implies x^3+1=0\\implies(x+1)(x^2-x+1)=0$$</p>\n\n<p>And $x=e^{\\pi i/3}$ is one such possible solution. If you want more information, please see the link above.</p>\n" }, { "answer_id": 2089633, "score": 1, "text": "<p>The laws we learn for powers apply to positive real numbers, they don't apply when you start to take roots of negative numbers.</p>\n\n<p>Your proof that $ (-1)^{\\frac {1}{2}}=1$ obviously raises alarm. As Ross mentioned in the comments this is related to functions being multivalued.</p>\n\n<p>A similar false-proof using multivalued functions is $\\sqrt{1}=1$ but $\\sqrt{1}=-1$. Therefore $1=-1$.</p>\n\n<p>Because we allowed the square root to have two results it lead to the result $1=-1$. Normally the square root is defined to be always positive so as to avoid this problem. However, if we allow for multiple values then we have to remember that $\\sqrt{a}=b$ and $\\sqrt{a}=c$ doesn't imply $b=\\sqrt{a}=c$. 
Allowing multivalued functions to exist means that we lose a property of equality.</p>\n" }, { "answer_id": 2089634, "score": 3, "text": "<p>It is simply the case that <a href=\"https://math.stackexchange.com/questions/1347504/for-which-complex-a-b-c-does-abc-abc-hold\">$(a^b)^c$ is not always equal to $a^{bc}$</a> if $a$ is not positive or if $b$ and/or $c$ are complex.</p>\n\n<p>The problems here are that</p>\n\n<p>$$x=(-1)^{1/2}\\implies x^2=-1\\implies x^4=1$$</p>\n\n<p>However, when you say that $1^{1/4}=1$, you cause a misconception. Usually, this is perfectly fine, but when you think about it, why not $1^{1/4}=-1$? Indeed, both values are solutions to the equation $x^4=1$, but neither are equal to $(-1)^{1/2}$. However, upon factoring, you could see that</p>\n\n<p>$$x^4=1\\implies x^4-1=0\\implies(x+1)(x-1)(x+i)(x-i)=0$$</p>\n\n<p>One of these is the correct solution in our context, though you happen to choose the wrong one.</p>\n\n<p>Similarly, you state that $(-1)^{1/3}=-1$, but</p>\n\n<p>$$x^3=-1\\implies x^3+1=0\\implies(x+1)(x^2-x+1)=0$$</p>\n\n<p>And $x=e^{\\pi i/3}$ is one such possible solution. If you want more information, please see the link above.</p>\n" }, { "answer_id": 2089633, "score": 1, "text": "<p>The laws we learn for powers apply to positive real numbers, they don't apply when you start to take roots of negative numbers.</p>\n\n<p>Your proof that $ (-1)^{\\frac {1}{2}}=1$ obviously raises alarm. As Ross mentioned in the comments this is related to functions being multivalued.</p>\n\n<p>A similar false-proof using multivalued functions is $\\sqrt{1}=1$ but $\\sqrt{1}=-1$. Therefore $1=-1$.</p>\n\n<p>Because we allowed the square root to have two results it lead to the result $1=-1$. Normally the square root is defined to be always positive so as to avoid this problem. However, if we allow for multiple values then we have to remember that $\\sqrt{a}=b$ and $\\sqrt{a}=c$ doesn't imply $b=\\sqrt{a}=c$. 
Allowing multivalued functions to exist means that we lose a property of equality.</p>\n" } ]
Question: Why can&#39;t we separate fraction powers in euler&#39;s formula? Answer 1: <p>It is simply the case that <a href="https://math.stackexchange.com/questions/1347504/for-which-complex-a-b-c-does-abc-abc-hold">$(a^b)^c$ is not always equal to $a^{bc}$</a> if $a$ is not positive or if $b$ and/or $c$ are complex.</p> <p>The problems here are that</p> <p>$$x=(-1)^{1/2}\implies x^2=-1\implies x^4=1$$</p> <p>However, when you say that $1^{1/4}=1$, you cause a misconception. Usually, this is perfectly fine, but when you think about it, why not $1^{1/4}=-1$? Indeed, both values are solutions to the equation $x^4=1$, but neither are equal to $(-1)^{1/2}$. However, upon factoring, you could see that</p> <p>$$x^4=1\implies x^4-1=0\implies(x+1)(x-1)(x+i)(x-i)=0$$</p> <p>One of these is the correct solution in our context, though you happen to choose the wrong one.</p> <p>Similarly, you state that $(-1)^{1/3}=-1$, but</p> <p>$$x^3=-1\implies x^3+1=0\implies(x+1)(x^2-x+1)=0$$</p> <p>And $x=e^{\pi i/3}$ is one such possible solution. If you want more information, please see the link above.</p> Answer 2: <p>The laws we learn for powers apply to positive real numbers, they don't apply when you start to take roots of negative numbers.</p> <p>Your proof that $ (-1)^{\frac {1}{2}}=1$ obviously raises alarm. As Ross mentioned in the comments this is related to functions being multivalued.</p> <p>A similar false-proof using multivalued functions is $\sqrt{1}=1$ but $\sqrt{1}=-1$. Therefore $1=-1$.</p> <p>Because we allowed the square root to have two results it lead to the result $1=-1$. Normally the square root is defined to be always positive so as to avoid this problem. However, if we allow for multiple values then we have to remember that $\sqrt{a}=b$ and $\sqrt{a}=c$ doesn't imply $b=\sqrt{a}=c$. 
Allowing multivalued functions to exist means that we lose a property of equality.</p>
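The pitfall behind both answers above — that $(a^b)^c$ need not equal $a^{bc}$ once the base is negative — can be checked numerically with Python's principal-value complex powers. This is a hedged illustration added alongside the thread, not part of it:

```python
# Numerical check: with principal-value complex exponentiation,
# (a**b)**c and a**(b*c) can disagree for a negative base.
a = complex(-1, 0)

lhs = (a ** 2) ** 0.5   # ((-1)^2)^(1/2) = 1^(1/2) = 1
rhs = a ** (2 * 0.5)    # (-1)^1 = -1  -- the "combine exponents" rule fails

ok = (a ** 0.5) ** 2    # ((-1)^(1/2))^2 = i^2 = -1 (here the rule happens to hold)

print(lhs, rhs, ok)
```

Python picks the principal branch of the complex logarithm for each non-integer power, which is exactly the single-valued choice the answers warn about: each individual step is consistent, but chaining them as if exponents always multiply silently switches branches.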
Solving proportions with $3$ ratios, $x:3:y = -2:3:-4$
2,089,636
6
https://math.stackexchange.com/questions/2089636/solving-proportions-with-3-ratios-x3y-23-4
<p>Proportion seems simple enough for me. Example is $4:x = 2:5,$ and the answer is $x = 10.$</p> <p>My problem is how do I solve for proportions with $3$ ratios like $x:3:y = -2:3:-4$ ?</p> <p>Do I write it like</p> <p>$$\frac{\frac{x}{3}}{y} = \frac{\frac{-2}{3}}{-4} $$ ?</p>
[ { "answer_id": 2089644, "score": 8, "text": "<p>Just like chained equalities, you can break it apart. $x:3:y=-2:3:-4$ becomes $x:3=-2:3$ and $3:y=3:-4$. Now solve them independently.</p>\n" }, { "answer_id": 2089668, "score": 5, "text": "<p>We have: $x:3:y=-2:3:-4$</p>\n\n<p>$\\Rightarrow x:3=-2:3$</p>\n\n<p>$\\Rightarrow \\dfrac{x}{3}=-\\dfrac{2}{3}$</p>\n\n<p>$\\Rightarrow x=-2$</p>\n\n<p>and</p>\n\n<p>$\\Rightarrow 3:y=3:-4$</p>\n\n<p>$\\Rightarrow \\dfrac{3}{y}=-\\dfrac{3}{4}$</p>\n\n<p>$\\Rightarrow \\dfrac{y}{3}=-\\dfrac{4}{3}$</p>\n\n<p>$\\Rightarrow y=-4$</p>\n\n<p>$\\therefore x=-2,y=-4$</p>\n" } ]
Question: Solving proportions with $3$ ratios,$ x:3:y = -2:3:-4$ Answer 1: <p>Just like chained equalities, you can break it apart. $x:3:y=-2:3:-4$ becomes $x:3=-2:3$ and $3:y=3:-4$. Now solve them independently.</p> Answer 2: <p>We have: $x:3:y=-2:3:-4$</p> <p>$\Rightarrow x:3=-2:3$</p> <p>$\Rightarrow \dfrac{x}{3}=-\dfrac{2}{3}$</p> <p>$\Rightarrow x=-2$</p> <p>and</p> <p>$\Rightarrow 3:y=3:-4$</p> <p>$\Rightarrow \dfrac{3}{y}=-\dfrac{3}{4}$</p> <p>$\Rightarrow \dfrac{y}{3}=-\dfrac{4}{3}$</p> <p>$\Rightarrow y=-4$</p> <p>$\therefore x=-2,y=-4$</p>
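The split-and-solve recipe above can be checked exactly with rational arithmetic; a minimal Python sketch (illustrative, not from the original exchange):

```python
from fractions import Fraction

# x : 3 : y = -2 : 3 : -4 splits into two two-term proportions:
#   x : 3 = -2 : 3   and   3 : y = 3 : -4
x = Fraction(-2, 3) * 3          # from x/3 = -2/3
y = 3 / Fraction(3, -4)          # from 3/y = 3/(-4), so y = 3 / (3/-4)

assert x == -2 and y == -4
print(x, y)  # -2 -4
```

Using `Fraction` instead of floats keeps the cross-multiplication exact, so the equality checks are not subject to rounding.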
Evaluate $\int_0^\infty \frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\mathrm dx$
2,089,757
6
https://math.stackexchange.com/questions/2089757/evaluate-int-0-infty-frac-ln-left-2x1-right-x-left-x1-right-m
<p>How to evaluate $$\int_{0}^{\infty }\frac{\ln(2x+1)}{x(x+1)}\,\mathrm dx?$$ I tried $$\frac{\ln(2x+1)}{x(x+1)}=\ln(2x+1)\left (\frac{1}{x}-\frac{1}{1+x} \right)$$ but I don't know how to go on.</p>
[ { "answer_id": 2089820, "score": 5, "text": "<p>Alternatively we can use Feynman's Trick if we let, $$I(a)=\\int_0^{\\infty} \\frac{\\ln(ax+1)}{x(x+1)}\\mathrm dx,$$ then if we take the derivative of $I(a)$ and expand using partial fractions we have $$I'(a)=\\int_0^{\\infty} \\frac{1}{(ax+1)(x+1)}\\mathrm dx =\\int_0^{\\infty} \\frac{a}{(a-1)(ax+1)} - \\frac{1}{(a-1)(x+1)}\\mathrm dx.$$ Integrating this gives, $$I'(a)=\\bigg(\\frac{1}{a-1}\\ln|ax+1|-\\frac{1}{a-1}\\ln|x+1| \\bigg)|_0^{\\infty}=\\frac{1}{a-1}\\ln|\\frac{ax+1}{x+1}| |_0^{\\infty}.$$ Noticing the lower bound is zero we're left with an easy limit$$I'(a)=\\lim_{x\\to\\infty}\\frac{1}{a-1}\\ln|\\frac{ax+1}{x+1}|=\\frac{\\ln(a)}{a-1}.$$ Now if we integrate we can get $I(a)$, this is easy because our integral closely resembles the definition of the dilogarithm, so we have $$I(a)=\\int \\frac{\\ln(a)}{a-1} da=-Li_2(1-a)+C.$$ Now if we notice $I(0)=0$ and note that $Li_2(1)=\\frac{\\pi^2}{6}$ we see that $C=\\frac{\\pi^2}{6}.$ So finally if we note that for your problem $a=2$ have that, $$I(2)=-Li_2(-1)+\\frac{\\pi^2}{6}=\\frac{\\pi^2}{12}+\\frac{\\pi^2}{6}=\\frac{\\pi^2}{4}.$$</p>\n\n<p>This is admittedly not as nice as the approach using the geometric series and makes heavy use of the dilogarithm and it's particular values but I think it's kinda fun.</p>\n" }, { "answer_id": 2089769, "score": 4, "text": "<p>do the substitution $2x+1\\rightarrow x$ and the let $x\\rightarrow x^{-1}$ , hence\n$$\\int_{0}^{\\infty }\\frac{\\ln\\left ( 2x+1 \\right )}{x\\left ( x+1 \\right )}\\, \\mathrm{d}x=2\\int_{1}^{\\infty }\\frac{\\ln x}{x^{2}-1}\\, \\mathrm{d}x=-2\\int_{0}^{1}\\frac{\\ln x}{1-x^{2}}\\, \\mathrm{d}x$$\nthen use the geometric series\n$$\\frac{1}{1-x^{2}}=\\sum_{n=0}^{\\infty }x^{2n}$$\nwe get\n$$\\int_{0}^{\\infty }\\frac{\\ln\\left ( 2x+1 \\right )}{x\\left ( x+1 \\right )}\\, \\mathrm{d}x=2\\sum_{n=0}^{\\infty }\\frac{1}{\\left ( 2n+1 \\right )^{2}}$$\nand the answer will follow.</p>\n" }, { 
"answer_id": 2089860, "score": 1, "text": "<p><em>Too long for a comment.</em></p>\n\n<p>Making the problem more general, it is possible to compute $$I=\\int \\frac{\\log (a x+b)}{(x+c) (x+d)}\\,\\mathrm dx$$ Using one integration by parts $$u=\\log (a x+b)\\implies u'=\\frac{a}{a x+b}\\,\\mathrm dx$$ $$v'=\\frac{\\mathrm dx}{(c+x) (d+x)}\\implies v=\\frac{\\log (x+d)-\\log (x+c)}{c-d}$$ So, if $c\\neq d$,$$(c-d)I=\\log (a x+b) \\log \\left(\\frac{(x+d) (a c-b)}{(x+c) (a\n d-b)}\\right)-\\text{Li}_2\\left(\\frac{a x+b}{b-a\n c}\\right)+\\text{Li}_2\\left(\\frac{a x+b}{b-a d}\\right) $$ and, if $c=d$, $$I=\\frac{a (x+c) \\log (x+c)-(a x+b) \\log (a x+b)}{(x+c) (b-a c)}$$</p>\n" }, { "answer_id": 2106818, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\int_{0}^{\\infty}{\\ln\\pars{2x + 1} \\over x\\pars{x + 1}}\\,\\dd x &amp;\n\\,\\,\\,\\stackrel{2x + 1\\ \\mapsto\\ x}{=}\\,\\,\\,\n\\int_{1}^{\\infty}{\\ln\\pars{x} \\over\n\\bracks{\\pars{x - 1}/2}\\bracks{\\pars{x - 1}/2 + 1}}\\,{\\dd x \\over 2} =\n2\\int_{1}^{\\infty}{\\ln\\pars{x} \\over x^{2} - 1}\\,\\dd x\n\\\\[5mm] &amp; \\stackrel{x\\ \\mapsto\\ 1/x}{=}\\,\\,\\,\n-2\\int_{0}^{1}{\\ln\\pars{x} \\over 1 - x^{2}}\\,\\dd 
x\n\\,\\,\\,\\stackrel{x^{2}\\ \\mapsto\\ x}{=}\\,\\,\\,\n-\\,{1 \\over 2}\\int_{0}^{1}{x^{-1/2}\\,\\ln\\pars{x} \\over 1 - x}\\,\\dd x\n\\\\[5mm] &amp; =\n\\left.{1 \\over 2}\\,\\totald{}{\\mu}\\int_{0}^{1}{1 - x^{\\mu} \\over 1 - x}\\,\\dd x\\,\n\\right\\vert_{\\ \\mu\\ =\\ -1/2} =\n\\left.{1 \\over 2}\\,\\totald{\\bracks{\\Psi\\pars{\\mu + 1} + \\gamma}}{\\mu}\n\\right\\vert_{\\ \\mu\\ =\\ -1/2}\\label{1}\\tag{1}\n\\end{align}</p>\n\n<blockquote>\n <p>where $\\ds{\\Psi}$ is the $Digamma\\ Function$ and $\\gamma$ is the\n $Euler\\mbox{-}Mascheroni\\ Constant$: See $\\mathbf{6.3.22}$ in <a href=\"http://people.math.sfu.ca/~cbm/aands/page_259.htm\" rel=\"nofollow noreferrer\">A&amp;S Table</a>.</p>\n</blockquote>\n\n<p>From expression&nbsp;\\eqref{1}:\n$$\n\\int_{0}^{\\infty}{\\ln\\pars{2x + 1} \\over x\\pars{x + 1}}\\,\\dd x =\n{1 \\over 2}\\,\\Psi\\, '\\pars{1 \\over 2} = \\bbx{\\ds{\\pi^{2} \\over 4}}\n$$</p>\n\n<blockquote>\n <p>See $\\mathbf{6.4.4}$ in\n <a href=\"http://people.math.sfu.ca/~cbm/aands/page_260.htm\" rel=\"nofollow noreferrer\">A&amp;S Table</a>.</p>\n</blockquote>\n" }, { "answer_id": 2106855, "score": 0, "text": "<p>This might be a late response, but consider the double integral</p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\frac{x}{1+x^2} \\ dy \\ dx.$$</p>\n\n<p>We will evaluate $I$ in two ways. 
Integrating with respect to $y$ first, we see that</p>\n\n<p>$$\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\ dy= \\lim_{b \\rightarrow \\infty}\\frac{\\arctan((2b+1)x)}{2x}-\\frac{\\arctan(x)}{2x}=\\frac{\\pi}{4x}-\\frac{\\arctan(x)}{2x}$$ and get</p>\n\n<p>$$I=\\int_{0}^{\\infty} \\left(\\frac{\\pi}{4x}-\\frac{\\arctan(x)}{2x} \\right) \\frac{x}{1+x^2} \\ dx=\\int_{0}^{\\infty} \\frac{\\pi}{4(1+x^2)}-\\frac{\\arctan(x)}{2(1+x^2)} \\ dx=\\frac{\\pi^2}{8}-\\frac{\\pi^2}{16}=\\frac{\\pi^2}{16}.$$ </p>\n\n<p>Now we reverse the order of integration as such:</p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\frac{x}{1+x^2} \\ dx \\ dy.$$</p>\n\n<p>To clean things up a little bit, let $u=x^2, \\ du = 2 x \\ dx.$ So </p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+u(2y+1)^2} \\frac{u}{2(1+u)} \\ du \\ dy.$$</p>\n\n<p>Integrating with respect to $u,$ we can use partial fractions to get that </p>\n\n<p>$$\\frac{1}{1+u(2y+1)^2} \\frac{1}{2(1+u)}=\\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p>\n\n<p>We see </p>\n\n<p>$$\\int_{0}^{\\infty} \\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)} \\ du $$\n$$= \\lim_{b \\rightarrow \\infty}\\frac{\\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \\frac{\\ln(b+1)}{2(2y+1)^2-2}= \\frac{\\ln(2y+1)}{(2y+1)^2-1}=\\frac{\\ln(2y+1)}{4y(y+1)}.$$</p>\n\n<p>Hence, </p>\n\n<p>$$I=\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{4y(y+1)} \\ dy=\\frac{\\pi^2}{16},$$</p>\n\n<p>So $$\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{y(y+1)} \\ dy=\\frac{\\pi^2}{4}.$$</p>\n" } ]
\\frac{1}{2(1+u)}=\\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p>\n\n<p>We see </p>\n\n<p>$$\\int_{0}^{\\infty} \\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)} \\ du $$\n$$= \\lim_{b \\rightarrow \\infty}\\frac{\\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \\frac{\\ln(b+1)}{2(2y+1)^2-2}= \\frac{\\ln(2y+1)}{(2y+1)^2-1}=\\frac{\\ln(2y+1)}{4y(y+1)}.$$</p>\n\n<p>Hence, </p>\n\n<p>$$I=\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{4y(y+1)} \\ dy=\\frac{\\pi^2}{16},$$</p>\n\n<p>So $$I=\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{y(y+1)} \\ dy=\\frac{\\pi^2}{4}.$$</p>\n" }, { "answer_id": 2089820, "score": 5, "text": "<p>Alternatively we can use Feynman's Trick if we let, $$I(a)=\\int_0^{\\infty} \\frac{\\ln(ax+1)}{x(x+1)}\\mathrm dx,$$ then if we take the derivative of $I(a)$ and expand using partial fractions we have $$I'(a)=\\int_0^{\\infty} \\frac{1}{(ax+1)(x+1)}\\mathrm dx =\\int_0^{\\infty} \\frac{a}{(a-1)(ax+1)} - \\frac{1}{(a-1)(x+1)}\\mathrm dx.$$ Integrating this gives, $$I'(a)=\\bigg(\\frac{1}{a-1}\\ln|ax+1|-\\frac{1}{a-1}\\ln|x+1| \\bigg)|_0^{\\infty}=\\frac{1}{a-1}\\ln|\\frac{ax+1}{x+1}| |_0^{\\infty}.$$ Noticing the lower bound is zero we're left with an easy limit$$I'(a)=\\lim_{x\\to\\infty}\\frac{1}{a-1}\\ln|\\frac{ax+1}{x+1}|=\\frac{\\ln(a)}{a-1}.$$ Now if we integrate we can get $I(a)$, this is easy because our integral closely resembles the definition of the dilogarithm, so we have $$I(a)=\\int \\frac{\\ln(a)}{a-1} da=-Li_2(1-a)+C.$$ Now if we notice $I(0)=0$ and note that $Li_2(1)=\\frac{\\pi^2}{6}$ we see that $C=\\frac{\\pi^2}{6}.$ So finally if we note that for your problem $a=2$ have that, $$I(2)=-Li_2(-1)+\\frac{\\pi^2}{6}=\\frac{\\pi^2}{12}+\\frac{\\pi^2}{6}=\\frac{\\pi^2}{4}.$$</p>\n\n<p>This is admittedly not as nice as the approach using the geometric series and makes heavy use of the dilogarithm and it's particular values but I think it's kinda fun.</p>\n" }, { "answer_id": 2089769, "score": 4, "text": "<p>do 
the substitution $2x+1\\rightarrow x$ and the let $x\\rightarrow x^{-1}$ , hence\n$$\\int_{0}^{\\infty }\\frac{\\ln\\left ( 2x+1 \\right )}{x\\left ( x+1 \\right )}\\, \\mathrm{d}x=2\\int_{1}^{\\infty }\\frac{\\ln x}{x^{2}-1}\\, \\mathrm{d}x=-2\\int_{0}^{1}\\frac{\\ln x}{1-x^{2}}\\, \\mathrm{d}x$$\nthen use the geometric series\n$$\\frac{1}{1-x^{2}}=\\sum_{n=0}^{\\infty }x^{2n}$$\nwe get\n$$\\int_{0}^{\\infty }\\frac{\\ln\\left ( 2x+1 \\right )}{x\\left ( x+1 \\right )}\\, \\mathrm{d}x=2\\sum_{n=0}^{\\infty }\\frac{1}{\\left ( 2n+1 \\right )^{2}}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2089860, "score": 1, "text": "<p><em>Too long for a comment.</em></p>\n\n<p>Making the problem more general, it is possible to compute $$I=\\int \\frac{\\log (a x+b)}{(x+c) (x+d)}\\,\\mathrm dx$$ Using one integration by parts $$u=\\log (a x+b)\\implies u'=\\frac{a}{a x+b}\\,\\mathrm dx$$ $$v'=\\frac{\\mathrm dx}{(c+x) (d+x)}\\implies v=\\frac{\\log (x+d)-\\log (x+c)}{c-d}$$ So, if $c\\neq d$,$$(c-d)I=\\log (a x+b) \\log \\left(\\frac{(x+d) (a c-b)}{(x+c) (a\n d-b)}\\right)-\\text{Li}_2\\left(\\frac{a x+b}{b-a\n c}\\right)+\\text{Li}_2\\left(\\frac{a x+b}{b-a d}\\right) $$ and, if $c=d$, $$I=\\frac{a (x+c) \\log (x+c)-(a x+b) \\log (a x+b)}{(x+c) (b-a c)}$$</p>\n" }, { "answer_id": 2106818, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n 
\\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\int_{0}^{\\infty}{\\ln\\pars{2x + 1} \\over x\\pars{x + 1}}\\,\\dd x &amp;\n\\,\\,\\,\\stackrel{2x + 1\\ \\mapsto\\ x}{=}\\,\\,\\,\n\\int_{1}^{\\infty}{\\ln\\pars{x} \\over\n\\bracks{\\pars{x - 1}/2}\\bracks{\\pars{x - 1}/2 + 1}}\\,{\\dd x \\over 2} =\n2\\int_{1}^{\\infty}{\\ln\\pars{x} \\over x^{2} - 1}\\,\\dd x\n\\\\[5mm] &amp; \\stackrel{x\\ \\mapsto\\ 1/x}{=}\\,\\,\\,\n-2\\int_{0}^{1}{\\ln\\pars{x} \\over 1 - x^{2}}\\,\\dd x\n\\,\\,\\,\\stackrel{x^{2}\\ \\mapsto\\ x}{=}\\,\\,\\,\n-\\,{1 \\over 2}\\int_{0}^{1}{x^{-1/2}\\,\\ln\\pars{x} \\over 1 - x}\\,\\dd x\n\\\\[5mm] &amp; =\n\\left.{1 \\over 2}\\,\\totald{}{\\mu}\\int_{0}^{1}{1 - x^{\\mu} \\over 1 - x}\\,\\dd x\\,\n\\right\\vert_{\\ \\mu\\ =\\ -1/2} =\n\\left.{1 \\over 2}\\,\\totald{\\bracks{\\Psi\\pars{\\mu + 1} + \\gamma}}{\\mu}\n\\right\\vert_{\\ \\mu\\ =\\ -1/2}\\label{1}\\tag{1}\n\\end{align}</p>\n\n<blockquote>\n <p>where $\\ds{\\Psi}$ is the $Digamma\\ Function$ and $\\gamma$ is the\n $Euler\\mbox{-}Mascheroni\\ Constant$: See $\\mathbf{6.3.22}$ in <a href=\"http://people.math.sfu.ca/~cbm/aands/page_259.htm\" rel=\"nofollow noreferrer\">A&amp;S Table</a>.</p>\n</blockquote>\n\n<p>From expression&nbsp;\\eqref{1}:\n$$\n\\int_{0}^{\\infty}{\\ln\\pars{2x + 1} \\over x\\pars{x + 1}}\\,\\dd x =\n{1 \\over 2}\\,\\Psi\\, '\\pars{1 \\over 2} = \\bbx{\\ds{\\pi^{2} \\over 4}}\n$$</p>\n\n<blockquote>\n <p>See $\\mathbf{6.4.4}$ in\n <a href=\"http://people.math.sfu.ca/~cbm/aands/page_260.htm\" rel=\"nofollow noreferrer\">A&amp;S Table</a>.</p>\n</blockquote>\n" }, { "answer_id": 2106855, "score": 0, "text": "<p>This might be a late response, but consider the double integral</p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\frac{x}{1+x^2} \\ dy \\ dx.$$</p>\n\n<p>We will evaluate $I$ in two ways. 
Integrating with respect to $y$ first, we see that</p>\n\n<p>$$\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\ dy= \\lim_{b \\rightarrow \\infty}\\frac{\\arctan((2b+1)x)}{2x}-\\frac{\\arctan(x)}{2x}=\\frac{\\pi}{4x}-\\frac{\\arctan(x)}{2x}$$ and get</p>\n\n<p>$$I=\\int_{0}^{\\infty} \\left(\\frac{\\pi}{4x}-\\frac{\\arctan(x)}{2x} \\right) \\frac{x}{1+x^2} \\ dx=\\int_{0}^{\\infty} \\frac{\\pi}{4(1+x^2)}-\\frac{\\arctan(x)}{2(1+x^2)} \\ dx=\\frac{\\pi^2}{8}-\\frac{\\pi^2}{16}=\\frac{\\pi^2}{16}.$$ </p>\n\n<p>Now we reverse the order of integration as such:</p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\frac{x}{1+x^2} \\ dx \\ dy.$$</p>\n\n<p>To clean things up a little bit, let $u=x^2, \\ du = 2 x \\ dx.$ So </p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+u(2y+1)^2} \\frac{u}{2(1+u)} \\ du \\ dy.$$</p>\n\n<p>Integrating with respect to $u,$ we can use partial fractions to get that </p>\n\n<p>$$\\frac{1}{1+u(2y+1)^2} \\frac{1}{2(1+u)}=\\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p>\n\n<p>We see </p>\n\n<p>$$\\int_{0}^{\\infty} \\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)} \\ du $$\n$$= \\lim_{b \\rightarrow \\infty}\\frac{\\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \\frac{\\ln(b+1)}{2(2y+1)^2-2}= \\frac{\\ln(2y+1)}{(2y+1)^2-1}=\\frac{\\ln(2y+1)}{4y(y+1)}.$$</p>\n\n<p>Hence, </p>\n\n<p>$$I=\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{4y(y+1)} \\ dy=\\frac{\\pi^2}{16},$$</p>\n\n<p>So $$I=\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{y(y+1)} \\ dy=\\frac{\\pi^2}{4}.$$</p>\n" }, { "answer_id": 2089820, "score": 5, "text": "<p>Alternatively we can use Feynman's Trick if we let, $$I(a)=\\int_0^{\\infty} \\frac{\\ln(ax+1)}{x(x+1)}\\mathrm dx,$$ then if we take the derivative of $I(a)$ and expand using partial fractions we have $$I'(a)=\\int_0^{\\infty} \\frac{1}{(ax+1)(x+1)}\\mathrm dx =\\int_0^{\\infty} \\frac{a}{(a-1)(ax+1)} - \\frac{1}{(a-1)(x+1)}\\mathrm dx.$$ Integrating this 
gives, $$I'(a)=\\bigg(\\frac{1}{a-1}\\ln|ax+1|-\\frac{1}{a-1}\\ln|x+1| \\bigg)|_0^{\\infty}=\\frac{1}{a-1}\\ln|\\frac{ax+1}{x+1}| |_0^{\\infty}.$$ Noticing the lower bound is zero we're left with an easy limit$$I'(a)=\\lim_{x\\to\\infty}\\frac{1}{a-1}\\ln|\\frac{ax+1}{x+1}|=\\frac{\\ln(a)}{a-1}.$$ Now if we integrate we can get $I(a)$, this is easy because our integral closely resembles the definition of the dilogarithm, so we have $$I(a)=\\int \\frac{\\ln(a)}{a-1} da=-Li_2(1-a)+C.$$ Now if we notice $I(0)=0$ and note that $Li_2(1)=\\frac{\\pi^2}{6}$ we see that $C=\\frac{\\pi^2}{6}.$ So finally if we note that for your problem $a=2$ have that, $$I(2)=-Li_2(-1)+\\frac{\\pi^2}{6}=\\frac{\\pi^2}{12}+\\frac{\\pi^2}{6}=\\frac{\\pi^2}{4}.$$</p>\n\n<p>This is admittedly not as nice as the approach using the geometric series and makes heavy use of the dilogarithm and it's particular values but I think it's kinda fun.</p>\n" }, { "answer_id": 2089769, "score": 4, "text": "<p>do the substitution $2x+1\\rightarrow x$ and the let $x\\rightarrow x^{-1}$ , hence\n$$\\int_{0}^{\\infty }\\frac{\\ln\\left ( 2x+1 \\right )}{x\\left ( x+1 \\right )}\\, \\mathrm{d}x=2\\int_{1}^{\\infty }\\frac{\\ln x}{x^{2}-1}\\, \\mathrm{d}x=-2\\int_{0}^{1}\\frac{\\ln x}{1-x^{2}}\\, \\mathrm{d}x$$\nthen use the geometric series\n$$\\frac{1}{1-x^{2}}=\\sum_{n=0}^{\\infty }x^{2n}$$\nwe get\n$$\\int_{0}^{\\infty }\\frac{\\ln\\left ( 2x+1 \\right )}{x\\left ( x+1 \\right )}\\, \\mathrm{d}x=2\\sum_{n=0}^{\\infty }\\frac{1}{\\left ( 2n+1 \\right )^{2}}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2089860, "score": 1, "text": "<p><em>Too long for a comment.</em></p>\n\n<p>Making the problem more general, it is possible to compute $$I=\\int \\frac{\\log (a x+b)}{(x+c) (x+d)}\\,\\mathrm dx$$ Using one integration by parts $$u=\\log (a x+b)\\implies u'=\\frac{a}{a x+b}\\,\\mathrm dx$$ $$v'=\\frac{\\mathrm dx}{(c+x) (d+x)}\\implies v=\\frac{\\log (x+d)-\\log (x+c)}{c-d}$$ So, if $c\\neq 
d$,$$(c-d)I=\\log (a x+b) \\log \\left(\\frac{(x+d) (a c-b)}{(x+c) (a\n d-b)}\\right)-\\text{Li}_2\\left(\\frac{a x+b}{b-a\n c}\\right)+\\text{Li}_2\\left(\\frac{a x+b}{b-a d}\\right) $$ and, if $c=d$, $$I=\\frac{a (x+c) \\log (x+c)-(a x+b) \\log (a x+b)}{(x+c) (b-a c)}$$</p>\n" }, { "answer_id": 2106818, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\int_{0}^{\\infty}{\\ln\\pars{2x + 1} \\over x\\pars{x + 1}}\\,\\dd x &amp;\n\\,\\,\\,\\stackrel{2x + 1\\ \\mapsto\\ x}{=}\\,\\,\\,\n\\int_{1}^{\\infty}{\\ln\\pars{x} \\over\n\\bracks{\\pars{x - 1}/2}\\bracks{\\pars{x - 1}/2 + 1}}\\,{\\dd x \\over 2} =\n2\\int_{1}^{\\infty}{\\ln\\pars{x} \\over x^{2} - 1}\\,\\dd x\n\\\\[5mm] &amp; \\stackrel{x\\ \\mapsto\\ 1/x}{=}\\,\\,\\,\n-2\\int_{0}^{1}{\\ln\\pars{x} \\over 1 - x^{2}}\\,\\dd x\n\\,\\,\\,\\stackrel{x^{2}\\ \\mapsto\\ x}{=}\\,\\,\\,\n-\\,{1 \\over 2}\\int_{0}^{1}{x^{-1/2}\\,\\ln\\pars{x} \\over 1 - x}\\,\\dd x\n\\\\[5mm] &amp; =\n\\left.{1 \\over 2}\\,\\totald{}{\\mu}\\int_{0}^{1}{1 - x^{\\mu} \\over 1 - x}\\,\\dd x\\,\n\\right\\vert_{\\ \\mu\\ =\\ -1/2} =\n\\left.{1 \\over 2}\\,\\totald{\\bracks{\\Psi\\pars{\\mu + 1} + \\gamma}}{\\mu}\n\\right\\vert_{\\ \\mu\\ =\\ 
-1/2}\\label{1}\\tag{1}\n\\end{align}</p>\n\n<blockquote>\n <p>where $\\ds{\\Psi}$ is the $Digamma\\ Function$ and $\\gamma$ is the\n $Euler\\mbox{-}Mascheroni\\ Constant$: See $\\mathbf{6.3.22}$ in <a href=\"http://people.math.sfu.ca/~cbm/aands/page_259.htm\" rel=\"nofollow noreferrer\">A&amp;S Table</a>.</p>\n</blockquote>\n\n<p>From expression&nbsp;\\eqref{1}:\n$$\n\\int_{0}^{\\infty}{\\ln\\pars{2x + 1} \\over x\\pars{x + 1}}\\,\\dd x =\n{1 \\over 2}\\,\\Psi\\, '\\pars{1 \\over 2} = \\bbx{\\ds{\\pi^{2} \\over 4}}\n$$</p>\n\n<blockquote>\n <p>See $\\mathbf{6.4.4}$ in\n <a href=\"http://people.math.sfu.ca/~cbm/aands/page_260.htm\" rel=\"nofollow noreferrer\">A&amp;S Table</a>.</p>\n</blockquote>\n" }, { "answer_id": 2106855, "score": 0, "text": "<p>This might be a late response, but consider the double integral</p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\frac{x}{1+x^2} \\ dy \\ dx.$$</p>\n\n<p>We will evaluate $I$ in two ways. Integrating with respect to $y$ first, we see that</p>\n\n<p>$$\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\ dy= \\lim_{b \\rightarrow \\infty}\\frac{\\arctan((2b+1)x)}{2x}-\\frac{\\arctan(x)}{2x}=\\frac{\\pi}{4x}-\\frac{\\arctan(x)}{2x}$$ and get</p>\n\n<p>$$I=\\int_{0}^{\\infty} \\left(\\frac{\\pi}{4x}-\\frac{\\arctan(x)}{2x} \\right) \\frac{x}{1+x^2} \\ dx=\\int_{0}^{\\infty} \\frac{\\pi}{4(1+x^2)}-\\frac{\\arctan(x)}{2(1+x^2)} \\ dx=\\frac{\\pi^2}{8}-\\frac{\\pi^2}{16}=\\frac{\\pi^2}{16}.$$ </p>\n\n<p>Now we reverse the order of integration as such:</p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\frac{x}{1+x^2} \\ dx \\ dy.$$</p>\n\n<p>To clean things up a little bit, let $u=x^2, \\ du = 2 x \\ dx.$ So </p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+u(2y+1)^2} \\frac{u}{2(1+u)} \\ du \\ dy.$$</p>\n\n<p>Integrating with respect to $u,$ we can use partial fractions to get that </p>\n\n<p>$$\\frac{1}{1+u(2y+1)^2} 
\\frac{1}{2(1+u)}=\\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p>\n\n<p>We see </p>\n\n<p>$$\\int_{0}^{\\infty} \\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)} \\ du $$\n$$= \\lim_{b \\rightarrow \\infty}\\frac{\\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \\frac{\\ln(b+1)}{2(2y+1)^2-2}= \\frac{\\ln(2y+1)}{(2y+1)^2-1}=\\frac{\\ln(2y+1)}{4y(y+1)}.$$</p>\n\n<p>Hence, </p>\n\n<p>$$I=\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{4y(y+1)} \\ dy=\\frac{\\pi^2}{16},$$</p>\n\n<p>So $$I=\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{y(y+1)} \\ dy=\\frac{\\pi^2}{4}.$$</p>\n" }, { "answer_id": 2089820, "score": 5, "text": "<p>Alternatively we can use Feynman's Trick if we let, $$I(a)=\\int_0^{\\infty} \\frac{\\ln(ax+1)}{x(x+1)}\\mathrm dx,$$ then if we take the derivative of $I(a)$ and expand using partial fractions we have $$I'(a)=\\int_0^{\\infty} \\frac{1}{(ax+1)(x+1)}\\mathrm dx =\\int_0^{\\infty} \\frac{a}{(a-1)(ax+1)} - \\frac{1}{(a-1)(x+1)}\\mathrm dx.$$ Integrating this gives, $$I'(a)=\\bigg(\\frac{1}{a-1}\\ln|ax+1|-\\frac{1}{a-1}\\ln|x+1| \\bigg)|_0^{\\infty}=\\frac{1}{a-1}\\ln|\\frac{ax+1}{x+1}| |_0^{\\infty}.$$ Noticing the lower bound is zero we're left with an easy limit$$I'(a)=\\lim_{x\\to\\infty}\\frac{1}{a-1}\\ln|\\frac{ax+1}{x+1}|=\\frac{\\ln(a)}{a-1}.$$ Now if we integrate we can get $I(a)$, this is easy because our integral closely resembles the definition of the dilogarithm, so we have $$I(a)=\\int \\frac{\\ln(a)}{a-1} da=-Li_2(1-a)+C.$$ Now if we notice $I(0)=0$ and note that $Li_2(1)=\\frac{\\pi^2}{6}$ we see that $C=\\frac{\\pi^2}{6}.$ So finally if we note that for your problem $a=2$ have that, $$I(2)=-Li_2(-1)+\\frac{\\pi^2}{6}=\\frac{\\pi^2}{12}+\\frac{\\pi^2}{6}=\\frac{\\pi^2}{4}.$$</p>\n\n<p>This is admittedly not as nice as the approach using the geometric series and makes heavy use of the dilogarithm and it's particular values but I think it's kinda fun.</p>\n" }, { "answer_id": 2089769, "score": 4, "text": "<p>do 
the substitution $2x+1\\rightarrow x$ and the let $x\\rightarrow x^{-1}$ , hence\n$$\\int_{0}^{\\infty }\\frac{\\ln\\left ( 2x+1 \\right )}{x\\left ( x+1 \\right )}\\, \\mathrm{d}x=2\\int_{1}^{\\infty }\\frac{\\ln x}{x^{2}-1}\\, \\mathrm{d}x=-2\\int_{0}^{1}\\frac{\\ln x}{1-x^{2}}\\, \\mathrm{d}x$$\nthen use the geometric series\n$$\\frac{1}{1-x^{2}}=\\sum_{n=0}^{\\infty }x^{2n}$$\nwe get\n$$\\int_{0}^{\\infty }\\frac{\\ln\\left ( 2x+1 \\right )}{x\\left ( x+1 \\right )}\\, \\mathrm{d}x=2\\sum_{n=0}^{\\infty }\\frac{1}{\\left ( 2n+1 \\right )^{2}}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2089860, "score": 1, "text": "<p><em>Too long for a comment.</em></p>\n\n<p>Making the problem more general, it is possible to compute $$I=\\int \\frac{\\log (a x+b)}{(x+c) (x+d)}\\,\\mathrm dx$$ Using one integration by parts $$u=\\log (a x+b)\\implies u'=\\frac{a}{a x+b}\\,\\mathrm dx$$ $$v'=\\frac{\\mathrm dx}{(c+x) (d+x)}\\implies v=\\frac{\\log (x+d)-\\log (x+c)}{c-d}$$ So, if $c\\neq d$,$$(c-d)I=\\log (a x+b) \\log \\left(\\frac{(x+d) (a c-b)}{(x+c) (a\n d-b)}\\right)-\\text{Li}_2\\left(\\frac{a x+b}{b-a\n c}\\right)+\\text{Li}_2\\left(\\frac{a x+b}{b-a d}\\right) $$ and, if $c=d$, $$I=\\frac{a (x+c) \\log (x+c)-(a x+b) \\log (a x+b)}{(x+c) (b-a c)}$$</p>\n" }, { "answer_id": 2106818, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n 
\\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\int_{0}^{\\infty}{\\ln\\pars{2x + 1} \\over x\\pars{x + 1}}\\,\\dd x &amp;\n\\,\\,\\,\\stackrel{2x + 1\\ \\mapsto\\ x}{=}\\,\\,\\,\n\\int_{1}^{\\infty}{\\ln\\pars{x} \\over\n\\bracks{\\pars{x - 1}/2}\\bracks{\\pars{x - 1}/2 + 1}}\\,{\\dd x \\over 2} =\n2\\int_{1}^{\\infty}{\\ln\\pars{x} \\over x^{2} - 1}\\,\\dd x\n\\\\[5mm] &amp; \\stackrel{x\\ \\mapsto\\ 1/x}{=}\\,\\,\\,\n-2\\int_{0}^{1}{\\ln\\pars{x} \\over 1 - x^{2}}\\,\\dd x\n\\,\\,\\,\\stackrel{x^{2}\\ \\mapsto\\ x}{=}\\,\\,\\,\n-\\,{1 \\over 2}\\int_{0}^{1}{x^{-1/2}\\,\\ln\\pars{x} \\over 1 - x}\\,\\dd x\n\\\\[5mm] &amp; =\n\\left.{1 \\over 2}\\,\\totald{}{\\mu}\\int_{0}^{1}{1 - x^{\\mu} \\over 1 - x}\\,\\dd x\\,\n\\right\\vert_{\\ \\mu\\ =\\ -1/2} =\n\\left.{1 \\over 2}\\,\\totald{\\bracks{\\Psi\\pars{\\mu + 1} + \\gamma}}{\\mu}\n\\right\\vert_{\\ \\mu\\ =\\ -1/2}\\label{1}\\tag{1}\n\\end{align}</p>\n\n<blockquote>\n <p>where $\\ds{\\Psi}$ is the $Digamma\\ Function$ and $\\gamma$ is the\n $Euler\\mbox{-}Mascheroni\\ Constant$: See $\\mathbf{6.3.22}$ in <a href=\"http://people.math.sfu.ca/~cbm/aands/page_259.htm\" rel=\"nofollow noreferrer\">A&amp;S Table</a>.</p>\n</blockquote>\n\n<p>From expression&nbsp;\\eqref{1}:\n$$\n\\int_{0}^{\\infty}{\\ln\\pars{2x + 1} \\over x\\pars{x + 1}}\\,\\dd x =\n{1 \\over 2}\\,\\Psi\\, '\\pars{1 \\over 2} = \\bbx{\\ds{\\pi^{2} \\over 4}}\n$$</p>\n\n<blockquote>\n <p>See $\\mathbf{6.4.4}$ in\n <a href=\"http://people.math.sfu.ca/~cbm/aands/page_260.htm\" rel=\"nofollow noreferrer\">A&amp;S Table</a>.</p>\n</blockquote>\n" }, { "answer_id": 2106855, "score": 0, "text": "<p>This might be a late response, but consider the double integral</p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\frac{x}{1+x^2} \\ dy \\ dx.$$</p>\n\n<p>We will evaluate $I$ in two ways. 
Integrating with respect to $y$ first, we see that</p>\n\n<p>$$\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\ dy= \\lim_{b \\rightarrow \\infty}\\frac{\\arctan((2b+1)x)}{2x}-\\frac{\\arctan(x)}{2x}=\\frac{\\pi}{4x}-\\frac{\\arctan(x)}{2x}$$ and get</p>\n\n<p>$$I=\\int_{0}^{\\infty} \\left(\\frac{\\pi}{4x}-\\frac{\\arctan(x)}{2x} \\right) \\frac{x}{1+x^2} \\ dx=\\int_{0}^{\\infty} \\frac{\\pi}{4(1+x^2)}-\\frac{\\arctan(x)}{2(1+x^2)} \\ dx=\\frac{\\pi^2}{8}-\\frac{\\pi^2}{16}=\\frac{\\pi^2}{16}.$$ </p>\n\n<p>Now we reverse the order of integration as such:</p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\frac{x}{1+x^2} \\ dx \\ dy.$$</p>\n\n<p>To clean things up a little bit, let $u=x^2, \\ du = 2 x \\ dx.$ So </p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+u(2y+1)^2} \\frac{u}{2(1+u)} \\ du \\ dy.$$</p>\n\n<p>Integrating with respect to $u,$ we can use partial fractions to get that </p>\n\n<p>$$\\frac{1}{1+u(2y+1)^2} \\frac{1}{2(1+u)}=\\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p>\n\n<p>We see </p>\n\n<p>$$\\int_{0}^{\\infty} \\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)} \\ du $$\n$$= \\lim_{b \\rightarrow \\infty}\\frac{\\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \\frac{\\ln(b+1)}{2(2y+1)^2-2}= \\frac{\\ln(2y+1)}{(2y+1)^2-1}=\\frac{\\ln(2y+1)}{4y(y+1)}.$$</p>\n\n<p>Hence, </p>\n\n<p>$$I=\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{4y(y+1)} \\ dy=\\frac{\\pi^2}{16},$$</p>\n\n<p>So $$I=\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{y(y+1)} \\ dy=\\frac{\\pi^2}{4}.$$</p>\n" }, { "answer_id": 2089820, "score": 5, "text": "<p>Alternatively we can use Feynman's Trick if we let, $$I(a)=\\int_0^{\\infty} \\frac{\\ln(ax+1)}{x(x+1)}\\mathrm dx,$$ then if we take the derivative of $I(a)$ and expand using partial fractions we have $$I'(a)=\\int_0^{\\infty} \\frac{1}{(ax+1)(x+1)}\\mathrm dx =\\int_0^{\\infty} \\frac{a}{(a-1)(ax+1)} - \\frac{1}{(a-1)(x+1)}\\mathrm dx.$$ Integrating this 
gives, $$I'(a)=\\bigg(\\frac{1}{a-1}\\ln|ax+1|-\\frac{1}{a-1}\\ln|x+1| \\bigg)|_0^{\\infty}=\\frac{1}{a-1}\\ln|\\frac{ax+1}{x+1}| |_0^{\\infty}.$$ Noticing the lower bound is zero we're left with an easy limit$$I'(a)=\\lim_{x\\to\\infty}\\frac{1}{a-1}\\ln|\\frac{ax+1}{x+1}|=\\frac{\\ln(a)}{a-1}.$$ Now if we integrate we can get $I(a)$, this is easy because our integral closely resembles the definition of the dilogarithm, so we have $$I(a)=\\int \\frac{\\ln(a)}{a-1} da=-Li_2(1-a)+C.$$ Now if we notice $I(0)=0$ and note that $Li_2(1)=\\frac{\\pi^2}{6}$ we see that $C=\\frac{\\pi^2}{6}.$ So finally if we note that for your problem $a=2$ have that, $$I(2)=-Li_2(-1)+\\frac{\\pi^2}{6}=\\frac{\\pi^2}{12}+\\frac{\\pi^2}{6}=\\frac{\\pi^2}{4}.$$</p>\n\n<p>This is admittedly not as nice as the approach using the geometric series and makes heavy use of the dilogarithm and it's particular values but I think it's kinda fun.</p>\n" }, { "answer_id": 2089769, "score": 4, "text": "<p>do the substitution $2x+1\\rightarrow x$ and the let $x\\rightarrow x^{-1}$ , hence\n$$\\int_{0}^{\\infty }\\frac{\\ln\\left ( 2x+1 \\right )}{x\\left ( x+1 \\right )}\\, \\mathrm{d}x=2\\int_{1}^{\\infty }\\frac{\\ln x}{x^{2}-1}\\, \\mathrm{d}x=-2\\int_{0}^{1}\\frac{\\ln x}{1-x^{2}}\\, \\mathrm{d}x$$\nthen use the geometric series\n$$\\frac{1}{1-x^{2}}=\\sum_{n=0}^{\\infty }x^{2n}$$\nwe get\n$$\\int_{0}^{\\infty }\\frac{\\ln\\left ( 2x+1 \\right )}{x\\left ( x+1 \\right )}\\, \\mathrm{d}x=2\\sum_{n=0}^{\\infty }\\frac{1}{\\left ( 2n+1 \\right )^{2}}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2089860, "score": 1, "text": "<p><em>Too long for a comment.</em></p>\n\n<p>Making the problem more general, it is possible to compute $$I=\\int \\frac{\\log (a x+b)}{(x+c) (x+d)}\\,\\mathrm dx$$ Using one integration by parts $$u=\\log (a x+b)\\implies u'=\\frac{a}{a x+b}\\,\\mathrm dx$$ $$v'=\\frac{\\mathrm dx}{(c+x) (d+x)}\\implies v=\\frac{\\log (x+d)-\\log (x+c)}{c-d}$$ So, if $c\\neq 
d$,$$(c-d)I=\\log (a x+b) \\log \\left(\\frac{(x+d) (a c-b)}{(x+c) (a\n d-b)}\\right)-\\text{Li}_2\\left(\\frac{a x+b}{b-a\n c}\\right)+\\text{Li}_2\\left(\\frac{a x+b}{b-a d}\\right) $$ and, if $c=d$, $$I=\\frac{a (x+c) \\log (x+c)-(a x+b) \\log (a x+b)}{(x+c) (b-a c)}$$</p>\n" }, { "answer_id": 2106818, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\int_{0}^{\\infty}{\\ln\\pars{2x + 1} \\over x\\pars{x + 1}}\\,\\dd x &amp;\n\\,\\,\\,\\stackrel{2x + 1\\ \\mapsto\\ x}{=}\\,\\,\\,\n\\int_{1}^{\\infty}{\\ln\\pars{x} \\over\n\\bracks{\\pars{x - 1}/2}\\bracks{\\pars{x - 1}/2 + 1}}\\,{\\dd x \\over 2} =\n2\\int_{1}^{\\infty}{\\ln\\pars{x} \\over x^{2} - 1}\\,\\dd x\n\\\\[5mm] &amp; \\stackrel{x\\ \\mapsto\\ 1/x}{=}\\,\\,\\,\n-2\\int_{0}^{1}{\\ln\\pars{x} \\over 1 - x^{2}}\\,\\dd x\n\\,\\,\\,\\stackrel{x^{2}\\ \\mapsto\\ x}{=}\\,\\,\\,\n-\\,{1 \\over 2}\\int_{0}^{1}{x^{-1/2}\\,\\ln\\pars{x} \\over 1 - x}\\,\\dd x\n\\\\[5mm] &amp; =\n\\left.{1 \\over 2}\\,\\totald{}{\\mu}\\int_{0}^{1}{1 - x^{\\mu} \\over 1 - x}\\,\\dd x\\,\n\\right\\vert_{\\ \\mu\\ =\\ -1/2} =\n\\left.{1 \\over 2}\\,\\totald{\\bracks{\\Psi\\pars{\\mu + 1} + \\gamma}}{\\mu}\n\\right\\vert_{\\ \\mu\\ =\\ 
-1/2}\\label{1}\\tag{1}\n\\end{align}</p>\n\n<blockquote>\n <p>where $\\ds{\\Psi}$ is the $Digamma\\ Function$ and $\\gamma$ is the\n $Euler\\mbox{-}Mascheroni\\ Constant$: See $\\mathbf{6.3.22}$ in <a href=\"http://people.math.sfu.ca/~cbm/aands/page_259.htm\" rel=\"nofollow noreferrer\">A&amp;S Table</a>.</p>\n</blockquote>\n\n<p>From expression&nbsp;\\eqref{1}:\n$$\n\\int_{0}^{\\infty}{\\ln\\pars{2x + 1} \\over x\\pars{x + 1}}\\,\\dd x =\n{1 \\over 2}\\,\\Psi\\, '\\pars{1 \\over 2} = \\bbx{\\ds{\\pi^{2} \\over 4}}\n$$</p>\n\n<blockquote>\n <p>See $\\mathbf{6.4.4}$ in\n <a href=\"http://people.math.sfu.ca/~cbm/aands/page_260.htm\" rel=\"nofollow noreferrer\">A&amp;S Table</a>.</p>\n</blockquote>\n" }, { "answer_id": 2106855, "score": 0, "text": "<p>This might be a late response, but consider the double integral</p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\frac{x}{1+x^2} \\ dy \\ dx.$$</p>\n\n<p>We will evaluate $I$ in two ways. Integrating with respect to $y$ first, we see that</p>\n\n<p>$$\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\ dy= \\lim_{b \\rightarrow \\infty}\\frac{\\arctan((2b+1)x)}{2x}-\\frac{\\arctan(x)}{2x}=\\frac{\\pi}{4x}-\\frac{\\arctan(x)}{2x}$$ and get</p>\n\n<p>$$I=\\int_{0}^{\\infty} \\left(\\frac{\\pi}{4x}-\\frac{\\arctan(x)}{2x} \\right) \\frac{x}{1+x^2} \\ dx=\\int_{0}^{\\infty} \\frac{\\pi}{4(1+x^2)}-\\frac{\\arctan(x)}{2(1+x^2)} \\ dx=\\frac{\\pi^2}{8}-\\frac{\\pi^2}{16}=\\frac{\\pi^2}{16}.$$ </p>\n\n<p>Now we reverse the order of integration as such:</p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\frac{x}{1+x^2} \\ dx \\ dy.$$</p>\n\n<p>To clean things up a little bit, let $u=x^2, \\ du = 2 x \\ dx.$ So </p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+u(2y+1)^2} \\frac{u}{2(1+u)} \\ du \\ dy.$$</p>\n\n<p>Integrating with respect to $u,$ we can use partial fractions to get that </p>\n\n<p>$$\\frac{1}{1+u(2y+1)^2} 
\\frac{1}{2(1+u)}=\\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p>\n\n<p>We see </p>\n\n<p>$$\\int_{0}^{\\infty} \\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)} \\ du $$\n$$= \\lim_{b \\rightarrow \\infty}\\frac{\\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \\frac{\\ln(b+1)}{2(2y+1)^2-2}= \\frac{\\ln(2y+1)}{(2y+1)^2-1}=\\frac{\\ln(2y+1)}{4y(y+1)}.$$</p>\n\n<p>Hence, </p>\n\n<p>$$I=\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{4y(y+1)} \\ dy=\\frac{\\pi^2}{16},$$</p>\n\n<p>So $$I=\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{y(y+1)} \\ dy=\\frac{\\pi^2}{4}.$$</p>\n" }, { "answer_id": 2089820, "score": 5, "text": "<p>Alternatively we can use Feynman's Trick if we let, $$I(a)=\\int_0^{\\infty} \\frac{\\ln(ax+1)}{x(x+1)}\\mathrm dx,$$ then if we take the derivative of $I(a)$ and expand using partial fractions we have $$I'(a)=\\int_0^{\\infty} \\frac{1}{(ax+1)(x+1)}\\mathrm dx =\\int_0^{\\infty} \\frac{a}{(a-1)(ax+1)} - \\frac{1}{(a-1)(x+1)}\\mathrm dx.$$ Integrating this gives, $$I'(a)=\\bigg(\\frac{1}{a-1}\\ln|ax+1|-\\frac{1}{a-1}\\ln|x+1| \\bigg)|_0^{\\infty}=\\frac{1}{a-1}\\ln|\\frac{ax+1}{x+1}| |_0^{\\infty}.$$ Noticing the lower bound is zero we're left with an easy limit$$I'(a)=\\lim_{x\\to\\infty}\\frac{1}{a-1}\\ln|\\frac{ax+1}{x+1}|=\\frac{\\ln(a)}{a-1}.$$ Now if we integrate we can get $I(a)$, this is easy because our integral closely resembles the definition of the dilogarithm, so we have $$I(a)=\\int \\frac{\\ln(a)}{a-1} da=-Li_2(1-a)+C.$$ Now if we notice $I(0)=0$ and note that $Li_2(1)=\\frac{\\pi^2}{6}$ we see that $C=\\frac{\\pi^2}{6}.$ So finally if we note that for your problem $a=2$ have that, $$I(2)=-Li_2(-1)+\\frac{\\pi^2}{6}=\\frac{\\pi^2}{12}+\\frac{\\pi^2}{6}=\\frac{\\pi^2}{4}.$$</p>\n\n<p>This is admittedly not as nice as the approach using the geometric series and makes heavy use of the dilogarithm and it's particular values but I think it's kinda fun.</p>\n" }, { "answer_id": 2089769, "score": 4, "text": "<p>do 
the substitution $2x+1\\rightarrow x$ and the let $x\\rightarrow x^{-1}$ , hence\n$$\\int_{0}^{\\infty }\\frac{\\ln\\left ( 2x+1 \\right )}{x\\left ( x+1 \\right )}\\, \\mathrm{d}x=2\\int_{1}^{\\infty }\\frac{\\ln x}{x^{2}-1}\\, \\mathrm{d}x=-2\\int_{0}^{1}\\frac{\\ln x}{1-x^{2}}\\, \\mathrm{d}x$$\nthen use the geometric series\n$$\\frac{1}{1-x^{2}}=\\sum_{n=0}^{\\infty }x^{2n}$$\nwe get\n$$\\int_{0}^{\\infty }\\frac{\\ln\\left ( 2x+1 \\right )}{x\\left ( x+1 \\right )}\\, \\mathrm{d}x=2\\sum_{n=0}^{\\infty }\\frac{1}{\\left ( 2n+1 \\right )^{2}}$$\nand the answer will follow.</p>\n" }, { "answer_id": 2089860, "score": 1, "text": "<p><em>Too long for a comment.</em></p>\n\n<p>Making the problem more general, it is possible to compute $$I=\\int \\frac{\\log (a x+b)}{(x+c) (x+d)}\\,\\mathrm dx$$ Using one integration by parts $$u=\\log (a x+b)\\implies u'=\\frac{a}{a x+b}\\,\\mathrm dx$$ $$v'=\\frac{\\mathrm dx}{(c+x) (d+x)}\\implies v=\\frac{\\log (x+d)-\\log (x+c)}{c-d}$$ So, if $c\\neq d$,$$(c-d)I=\\log (a x+b) \\log \\left(\\frac{(x+d) (a c-b)}{(x+c) (a\n d-b)}\\right)-\\text{Li}_2\\left(\\frac{a x+b}{b-a\n c}\\right)+\\text{Li}_2\\left(\\frac{a x+b}{b-a d}\\right) $$ and, if $c=d$, $$I=\\frac{a (x+c) \\log (x+c)-(a x+b) \\log (a x+b)}{(x+c) (b-a c)}$$</p>\n" }, { "answer_id": 2106818, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n 
\\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\int_{0}^{\\infty}{\\ln\\pars{2x + 1} \\over x\\pars{x + 1}}\\,\\dd x &amp;\n\\,\\,\\,\\stackrel{2x + 1\\ \\mapsto\\ x}{=}\\,\\,\\,\n\\int_{1}^{\\infty}{\\ln\\pars{x} \\over\n\\bracks{\\pars{x - 1}/2}\\bracks{\\pars{x - 1}/2 + 1}}\\,{\\dd x \\over 2} =\n2\\int_{1}^{\\infty}{\\ln\\pars{x} \\over x^{2} - 1}\\,\\dd x\n\\\\[5mm] &amp; \\stackrel{x\\ \\mapsto\\ 1/x}{=}\\,\\,\\,\n-2\\int_{0}^{1}{\\ln\\pars{x} \\over 1 - x^{2}}\\,\\dd x\n\\,\\,\\,\\stackrel{x^{2}\\ \\mapsto\\ x}{=}\\,\\,\\,\n-\\,{1 \\over 2}\\int_{0}^{1}{x^{-1/2}\\,\\ln\\pars{x} \\over 1 - x}\\,\\dd x\n\\\\[5mm] &amp; =\n\\left.{1 \\over 2}\\,\\totald{}{\\mu}\\int_{0}^{1}{1 - x^{\\mu} \\over 1 - x}\\,\\dd x\\,\n\\right\\vert_{\\ \\mu\\ =\\ -1/2} =\n\\left.{1 \\over 2}\\,\\totald{\\bracks{\\Psi\\pars{\\mu + 1} + \\gamma}}{\\mu}\n\\right\\vert_{\\ \\mu\\ =\\ -1/2}\\label{1}\\tag{1}\n\\end{align}</p>\n\n<blockquote>\n <p>where $\\ds{\\Psi}$ is the $Digamma\\ Function$ and $\\gamma$ is the\n $Euler\\mbox{-}Mascheroni\\ Constant$: See $\\mathbf{6.3.22}$ in <a href=\"http://people.math.sfu.ca/~cbm/aands/page_259.htm\" rel=\"nofollow noreferrer\">A&amp;S Table</a>.</p>\n</blockquote>\n\n<p>From expression&nbsp;\\eqref{1}:\n$$\n\\int_{0}^{\\infty}{\\ln\\pars{2x + 1} \\over x\\pars{x + 1}}\\,\\dd x =\n{1 \\over 2}\\,\\Psi\\, '\\pars{1 \\over 2} = \\bbx{\\ds{\\pi^{2} \\over 4}}\n$$</p>\n\n<blockquote>\n <p>See $\\mathbf{6.4.4}$ in\n <a href=\"http://people.math.sfu.ca/~cbm/aands/page_260.htm\" rel=\"nofollow noreferrer\">A&amp;S Table</a>.</p>\n</blockquote>\n" }, { "answer_id": 2106855, "score": 0, "text": "<p>This might be a late response, but consider the double integral</p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\frac{x}{1+x^2} \\ dy \\ dx.$$</p>\n\n<p>We will evaluate $I$ in two ways. 
Integrating with respect to $y$ first, we see that</p>\n\n<p>$$\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\ dy= \\lim_{b \\rightarrow \\infty}\\frac{\\arctan((2b+1)x)}{2x}-\\frac{\\arctan(x)}{2x}=\\frac{\\pi}{4x}-\\frac{\\arctan(x)}{2x}$$ and get</p>\n\n<p>$$I=\\int_{0}^{\\infty} \\left(\\frac{\\pi}{4x}-\\frac{\\arctan(x)}{2x} \\right) \\frac{x}{1+x^2} \\ dx=\\int_{0}^{\\infty} \\frac{\\pi}{4(1+x^2)}-\\frac{\\arctan(x)}{2(1+x^2)} \\ dx=\\frac{\\pi^2}{8}-\\frac{\\pi^2}{16}=\\frac{\\pi^2}{16}.$$ </p>\n\n<p>Now we reverse the order of integration as such:</p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+x^2(2y+1)^2} \\frac{x}{1+x^2} \\ dx \\ dy.$$</p>\n\n<p>To clean things up a little bit, let $u=x^2, \\ du = 2 x \\ dx.$ So </p>\n\n<p>$$I=\\int_{0}^{\\infty}\\int_{0}^{\\infty} \\frac{1}{1+u(2y+1)^2} \\frac{u}{2(1+u)} \\ du \\ dy.$$</p>\n\n<p>Integrating with respect to $u,$ we can use partial fractions to get that </p>\n\n<p>$$\\frac{1}{1+u(2y+1)^2} \\frac{1}{2(1+u)}=\\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p>\n\n<p>We see </p>\n\n<p>$$\\int_{0}^{\\infty} \\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\\frac{1}{2(u+1)((2y+1)^2-1)} \\ du $$\n$$= \\lim_{b \\rightarrow \\infty}\\frac{\\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \\frac{\\ln(b+1)}{2(2y+1)^2-2}= \\frac{\\ln(2y+1)}{(2y+1)^2-1}=\\frac{\\ln(2y+1)}{4y(y+1)}.$$</p>\n\n<p>Hence, </p>\n\n<p>$$I=\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{4y(y+1)} \\ dy=\\frac{\\pi^2}{16},$$</p>\n\n<p>So $$I=\\int_{0}^{\\infty}\\frac{\\ln(2y+1)}{y(y+1)} \\ dy=\\frac{\\pi^2}{4}.$$</p>\n" } ]
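As a quick numerical sanity check on the closed form $\pi^2/4$ reached in the answers above (this sketch is not part of any of the original posts; the function names are mine): the substitution $x = t/(1-t)$ maps the integral onto $(0,1)$, where $x(x+1)=t/(1-t)^2$ and $dx=dt/(1-t)^2$, so the integrand collapses to $\ln\big(\frac{1+t}{1-t}\big)/t$, and a plain midpoint rule handles both endpoints.

```python
import math

def integrand(t: float) -> float:
    # After x = t/(1 - t), ln(2x+1)/(x(x+1)) dx becomes ln((1+t)/(1-t))/t dt.
    return math.log((1.0 + t) / (1.0 - t)) / t

def integral_midpoint(n: int = 200_000) -> float:
    # Midpoint rule on (0, 1); the sample points avoid the removable
    # singularity at t = 0 and the integrable log blow-up at t = 1.
    h = 1.0 / n
    return h * sum(integrand((i + 0.5) * h) for i in range(n))

print(integral_midpoint())  # ~2.46740, matching pi^2/4
print(math.pi ** 2 / 4)
```

The agreement to several decimal places is consistent with every derivation in the thread (series, dilogarithm, trigamma, and double integral all give $\pi^2/4 \approx 2.4674$).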
Question: Evaluate $\int_0^\infty \frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\mathrm dx$ Answer 1: <p>Alternatively we can use Feynman's Trick: if we let $$I(a)=\int_0^{\infty} \frac{\ln(ax+1)}{x(x+1)}\mathrm dx,$$ then if we take the derivative of $I(a)$ and expand using partial fractions we have $$I'(a)=\int_0^{\infty} \frac{1}{(ax+1)(x+1)}\mathrm dx =\int_0^{\infty} \frac{a}{(a-1)(ax+1)} - \frac{1}{(a-1)(x+1)}\mathrm dx.$$ Integrating this gives $$I'(a)=\bigg(\frac{1}{a-1}\ln|ax+1|-\frac{1}{a-1}\ln|x+1| \bigg)|_0^{\infty}=\frac{1}{a-1}\ln|\frac{ax+1}{x+1}| |_0^{\infty}.$$ Noticing the lower bound is zero, we're left with an easy limit $$I'(a)=\lim_{x\to\infty}\frac{1}{a-1}\ln|\frac{ax+1}{x+1}|=\frac{\ln(a)}{a-1}.$$ Now if we integrate we can get $I(a)$; this is easy because our integral closely resembles the definition of the dilogarithm, so we have $$I(a)=\int \frac{\ln(a)}{a-1} da=-Li_2(1-a)+C.$$ Now if we notice $I(0)=0$ and note that $Li_2(1)=\frac{\pi^2}{6}$ we see that $C=\frac{\pi^2}{6}.$ So finally, noting that for your problem $a=2$, we have $$I(2)=-Li_2(-1)+\frac{\pi^2}{6}=\frac{\pi^2}{12}+\frac{\pi^2}{6}=\frac{\pi^2}{4}.$$</p> <p>This is admittedly not as nice as the approach using the geometric series and makes heavy use of the dilogarithm and its particular values, but I think it's kinda fun.</p> Answer 2: <p>Do the substitution $2x+1\rightarrow x$ and then let $x\rightarrow x^{-1}$; hence $$\int_{0}^{\infty }\frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\, \mathrm{d}x=2\int_{1}^{\infty }\frac{\ln x}{x^{2}-1}\, \mathrm{d}x=-2\int_{0}^{1}\frac{\ln x}{1-x^{2}}\, \mathrm{d}x$$ then use the geometric series $$\frac{1}{1-x^{2}}=\sum_{n=0}^{\infty }x^{2n}$$ to get $$\int_{0}^{\infty }\frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\, \mathrm{d}x=2\sum_{n=0}^{\infty }\frac{1}{\left ( 2n+1 \right )^{2}}$$ and the answer will follow.</p> Answer 3: <p><em>Too long for a comment.</em></p> <p>Making the problem more general, it is 
possible to compute $$I=\int \frac{\log (a x+b)}{(x+c) (x+d)}\,\mathrm dx$$ Using one integration by parts $$u=\log (a x+b)\implies u'=\frac{a}{a x+b}\,\mathrm dx$$ $$v'=\frac{\mathrm dx}{(c+x) (d+x)}\implies v=\frac{\log (x+d)-\log (x+c)}{c-d}$$ So, if $c\neq d$,$$(c-d)I=\log (a x+b) \log \left(\frac{(x+d) (a c-b)}{(x+c) (a d-b)}\right)-\text{Li}_2\left(\frac{a x+b}{b-a c}\right)+\text{Li}_2\left(\frac{a x+b}{b-a d}\right) $$ and, if $c=d$, $$I=\frac{a (x+c) \log (x+c)-(a x+b) \log (a x+b)}{(x+c) (b-a c)}$$</p> Answer 4: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \int_{0}^{\infty}{\ln\pars{2x + 1} \over x\pars{x + 1}}\,\dd x &amp; \,\,\,\stackrel{2x + 1\ \mapsto\ x}{=}\,\,\, \int_{1}^{\infty}{\ln\pars{x} \over \bracks{\pars{x - 1}/2}\bracks{\pars{x - 1}/2 + 1}}\,{\dd x \over 2} = 2\int_{1}^{\infty}{\ln\pars{x} \over x^{2} - 1}\,\dd x \\[5mm] &amp; \stackrel{x\ \mapsto\ 1/x}{=}\,\,\, -2\int_{0}^{1}{\ln\pars{x} \over 1 - x^{2}}\,\dd x \,\,\,\stackrel{x^{2}\ \mapsto\ x}{=}\,\,\, -\,{1 \over 2}\int_{0}^{1}{x^{-1/2}\,\ln\pars{x} \over 1 - x}\,\dd x \\[5mm] &amp; = \left.{1 \over 2}\,\totald{}{\mu}\int_{0}^{1}{1 - x^{\mu} \over 1 - x}\,\dd x\, \right\vert_{\ \mu\ =\ -1/2} = \left.{1 \over 2}\,\totald{\bracks{\Psi\pars{\mu + 1} + \gamma}}{\mu} \right\vert_{\ \mu\ =\ -1/2}\label{1}\tag{1} \end{align}</p> 
<blockquote> <p>where $\ds{\Psi}$ is the $Digamma\ Function$ and $\gamma$ is the $Euler\mbox{-}Mascheroni\ Constant$: See $\mathbf{6.3.22}$ in <a href="http://people.math.sfu.ca/~cbm/aands/page_259.htm" rel="nofollow noreferrer">A&amp;S Table</a>.</p> </blockquote> <p>From expression&nbsp;\eqref{1}: $$ \int_{0}^{\infty}{\ln\pars{2x + 1} \over x\pars{x + 1}}\,\dd x = {1 \over 2}\,\Psi\, '\pars{1 \over 2} = \bbx{\ds{\pi^{2} \over 4}} $$</p> <blockquote> <p>See $\mathbf{6.4.4}$ in <a href="http://people.math.sfu.ca/~cbm/aands/page_260.htm" rel="nofollow noreferrer">A&amp;S Table</a>.</p> </blockquote> Answer 5: <p>This might be a late response, but consider the double integral</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dy \ dx.$$</p> <p>We will evaluate $I$ in two ways. Integrating with respect to $y$ first, we see that</p> <p>$$\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \ dy= \lim_{b \rightarrow \infty}\frac{\arctan((2b+1)x)}{2x}-\frac{\arctan(x)}{2x}=\frac{\pi}{4x}-\frac{\arctan(x)}{2x}$$ and get</p> <p>$$I=\int_{0}^{\infty} \left(\frac{\pi}{4x}-\frac{\arctan(x)}{2x} \right) \frac{x}{1+x^2} \ dx=\int_{0}^{\infty} \frac{\pi}{4(1+x^2)}-\frac{\arctan(x)}{2(1+x^2)} \ dx=\frac{\pi^2}{8}-\frac{\pi^2}{16}=\frac{\pi^2}{16}.$$ </p> <p>Now we reverse the order of integration as such:</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dx \ dy.$$</p> <p>To clean things up a little bit, let $u=x^2, \ du = 2 x \ dx.$ So </p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+u(2y+1)^2} \frac{u}{2(1+u)} \ du \ dy.$$</p> <p>Integrating with respect to $u,$ we can use partial fractions to get that </p> <p>$$\frac{1}{1+u(2y+1)^2} \frac{1}{2(1+u)}=\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p> <p>We see </p> <p>$$\int_{0}^{\infty} \frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)} \ du $$ $$= \lim_{b \rightarrow 
\infty}\frac{\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \frac{\ln(b+1)}{2(2y+1)^2-2}= \frac{\ln(2y+1)}{(2y+1)^2-1}=\frac{\ln(2y+1)}{4y(y+1)}.$$</p> <p>Hence, </p> <p>$$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{4y(y+1)} \ dy=\frac{\pi^2}{16},$$</p> <p>So $$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{y(y+1)} \ dy=\frac{\pi^2}{4}.$$</p>
<p>This might be a late response, but consider the double integral</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dy \ dx.$$</p> <p>We will evaluate $I$ in two ways. Integrating with respect to $y$ first, we see that</p> <p>$$\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \ dy= \lim_{b \rightarrow \infty}\frac{\arctan((2b+1)x)}{2x}-\frac{\arctan(x)}{2x}=\frac{\pi}{4x}-\frac{\arctan(x)}{2x}$$ and get</p> <p>$$I=\int_{0}^{\infty} \left(\frac{\pi}{4x}-\frac{\arctan(x)}{2x} \right) \frac{x}{1+x^2} \ dx=\int_{0}^{\infty} \frac{\pi}{4(1+x^2)}-\frac{\arctan(x)}{2(1+x^2)} \ dx=\frac{\pi^2}{8}-\frac{\pi^2}{16}=\frac{\pi^2}{16}.$$ </p> <p>Now we reverse the order of integration as such:</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dx \ dy.$$</p> <p>To clean things up a little bit, let $u=x^2, \ du = 2 x \ dx.$ So </p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+u(2y+1)^2} \frac{u}{2(1+u)} \ du \ dy.$$</p> <p>Integrating with respect to $u,$ we can use partial fractions to get that </p> <p>$$\frac{1}{1+u(2y+1)^2} \frac{1}{2(1+u)}=\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p> <p>We see </p> <p>$$\int_{0}^{\infty} \frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)} \ du $$ $$= \lim_{b \rightarrow \infty}\frac{\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \frac{\ln(b+1)}{2(2y+1)^2-2}= \frac{\ln(2y+1)}{(2y+1)^2-1}=\frac{\ln(2y+1)}{4y(y+1)}.$$</p> <p>Hence, </p> <p>$$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{4y(y+1)} \ dy=\frac{\pi^2}{16},$$</p> <p>So $$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{y(y+1)} \ dy=\frac{\pi^2}{4}.$$</p> Answer 16: <p>Alternatively we can use Feynman's Trick if we let, $$I(a)=\int_0^{\infty} \frac{\ln(ax+1)}{x(x+1)}\mathrm dx,$$ then if we take the derivative of $I(a)$ and expand using partial fractions we have $$I'(a)=\int_0^{\infty} \frac{1}{(ax+1)(x+1)}\mathrm dx =\int_0^{\infty} \frac{a}{(a-1)(ax+1)} - \frac{1}{(a-1)(x+1)}\mathrm dx.$$ 
Integrating this gives, $$I'(a)=\bigg(\frac{1}{a-1}\ln|ax+1|-\frac{1}{a-1}\ln|x+1| \bigg)|_0^{\infty}=\frac{1}{a-1}\ln|\frac{ax+1}{x+1}| |_0^{\infty}.$$ Noticing the lower bound is zero we're left with an easy limit$$I'(a)=\lim_{x\to\infty}\frac{1}{a-1}\ln|\frac{ax+1}{x+1}|=\frac{\ln(a)}{a-1}.$$ Now if we integrate we can get $I(a)$, this is easy because our integral closely resembles the definition of the dilogarithm, so we have $$I(a)=\int \frac{\ln(a)}{a-1} da=-Li_2(1-a)+C.$$ Now if we notice $I(0)=0$ and note that $Li_2(1)=\frac{\pi^2}{6}$ we see that $C=\frac{\pi^2}{6}.$ So finally if we note that for your problem $a=2$ have that, $$I(2)=-Li_2(-1)+\frac{\pi^2}{6}=\frac{\pi^2}{12}+\frac{\pi^2}{6}=\frac{\pi^2}{4}.$$</p> <p>This is admittedly not as nice as the approach using the geometric series and makes heavy use of the dilogarithm and it's particular values but I think it's kinda fun.</p> Answer 17: <p>do the substitution $2x+1\rightarrow x$ and the let $x\rightarrow x^{-1}$ , hence $$\int_{0}^{\infty }\frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\, \mathrm{d}x=2\int_{1}^{\infty }\frac{\ln x}{x^{2}-1}\, \mathrm{d}x=-2\int_{0}^{1}\frac{\ln x}{1-x^{2}}\, \mathrm{d}x$$ then use the geometric series $$\frac{1}{1-x^{2}}=\sum_{n=0}^{\infty }x^{2n}$$ we get $$\int_{0}^{\infty }\frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\, \mathrm{d}x=2\sum_{n=0}^{\infty }\frac{1}{\left ( 2n+1 \right )^{2}}$$ and the answer will follow.</p> Answer 18: <p><em>Too long for a comment.</em></p> <p>Making the problem more general, it is possible to compute $$I=\int \frac{\log (a x+b)}{(x+c) (x+d)}\,\mathrm dx$$ Using one integration by parts $$u=\log (a x+b)\implies u'=\frac{a}{a x+b}\,\mathrm dx$$ $$v'=\frac{\mathrm dx}{(c+x) (d+x)}\implies v=\frac{\log (x+d)-\log (x+c)}{c-d}$$ So, if $c\neq d$,$$(c-d)I=\log (a x+b) \log \left(\frac{(x+d) (a c-b)}{(x+c) (a d-b)}\right)-\text{Li}_2\left(\frac{a x+b}{b-a c}\right)+\text{Li}_2\left(\frac{a x+b}{b-a d}\right) $$ and, if 
$c=d$, $$I=\frac{a (x+c) \log (x+c)-(a x+b) \log (a x+b)}{(x+c) (b-a c)}$$</p> Answer 19: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \int_{0}^{\infty}{\ln\pars{2x + 1} \over x\pars{x + 1}}\,\dd x &amp; \,\,\,\stackrel{2x + 1\ \mapsto\ x}{=}\,\,\, \int_{1}^{\infty}{\ln\pars{x} \over \bracks{\pars{x - 1}/2}\bracks{\pars{x - 1}/2 + 1}}\,{\dd x \over 2} = 2\int_{1}^{\infty}{\ln\pars{x} \over x^{2} - 1}\,\dd x \\[5mm] &amp; \stackrel{x\ \mapsto\ 1/x}{=}\,\,\, -2\int_{0}^{1}{\ln\pars{x} \over 1 - x^{2}}\,\dd x \,\,\,\stackrel{x^{2}\ \mapsto\ x}{=}\,\,\, -\,{1 \over 2}\int_{0}^{1}{x^{-1/2}\,\ln\pars{x} \over 1 - x}\,\dd x \\[5mm] &amp; = \left.{1 \over 2}\,\totald{}{\mu}\int_{0}^{1}{1 - x^{\mu} \over 1 - x}\,\dd x\, \right\vert_{\ \mu\ =\ -1/2} = \left.{1 \over 2}\,\totald{\bracks{\Psi\pars{\mu + 1} + \gamma}}{\mu} \right\vert_{\ \mu\ =\ -1/2}\label{1}\tag{1} \end{align}</p> <blockquote> <p>where $\ds{\Psi}$ is the $Digamma\ Function$ and $\gamma$ is the $Euler\mbox{-}Mascheroni\ Constant$: See $\mathbf{6.3.22}$ in <a href="http://people.math.sfu.ca/~cbm/aands/page_259.htm" rel="nofollow noreferrer">A&amp;S Table</a>.</p> </blockquote> <p>From expression&nbsp;\eqref{1}: $$ \int_{0}^{\infty}{\ln\pars{2x + 1} \over x\pars{x + 1}}\,\dd x = {1 \over 2}\,\Psi\, '\pars{1 \over 2} = \bbx{\ds{\pi^{2} \over 4}} $$</p> 
<blockquote> <p>See $\mathbf{6.4.4}$ in <a href="http://people.math.sfu.ca/~cbm/aands/page_260.htm" rel="nofollow noreferrer">A&amp;S Table</a>.</p> </blockquote> Answer 20: <p>This might be a late response, but consider the double integral</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dy \ dx.$$</p> <p>We will evaluate $I$ in two ways. Integrating with respect to $y$ first, we see that</p> <p>$$\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \ dy= \lim_{b \rightarrow \infty}\frac{\arctan((2b+1)x)}{2x}-\frac{\arctan(x)}{2x}=\frac{\pi}{4x}-\frac{\arctan(x)}{2x}$$ and get</p> <p>$$I=\int_{0}^{\infty} \left(\frac{\pi}{4x}-\frac{\arctan(x)}{2x} \right) \frac{x}{1+x^2} \ dx=\int_{0}^{\infty} \frac{\pi}{4(1+x^2)}-\frac{\arctan(x)}{2(1+x^2)} \ dx=\frac{\pi^2}{8}-\frac{\pi^2}{16}=\frac{\pi^2}{16}.$$ </p> <p>Now we reverse the order of integration as such:</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dx \ dy.$$</p> <p>To clean things up a little bit, let $u=x^2, \ du = 2 x \ dx.$ So </p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+u(2y+1)^2} \frac{u}{2(1+u)} \ du \ dy.$$</p> <p>Integrating with respect to $u,$ we can use partial fractions to get that </p> <p>$$\frac{1}{1+u(2y+1)^2} \frac{1}{2(1+u)}=\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p> <p>We see </p> <p>$$\int_{0}^{\infty} \frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)} \ du $$ $$= \lim_{b \rightarrow \infty}\frac{\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \frac{\ln(b+1)}{2(2y+1)^2-2}= \frac{\ln(2y+1)}{(2y+1)^2-1}=\frac{\ln(2y+1)}{4y(y+1)}.$$</p> <p>Hence, </p> <p>$$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{4y(y+1)} \ dy=\frac{\pi^2}{16},$$</p> <p>So $$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{y(y+1)} \ dy=\frac{\pi^2}{4}.$$</p> Answer 21: <p>Alternatively we can use Feynman's Trick if we let, $$I(a)=\int_0^{\infty} \frac{\ln(ax+1)}{x(x+1)}\mathrm dx,$$ then if we take the derivative of $I(a)$ 
and expand using partial fractions we have $$I'(a)=\int_0^{\infty} \frac{1}{(ax+1)(x+1)}\mathrm dx =\int_0^{\infty} \frac{a}{(a-1)(ax+1)} - \frac{1}{(a-1)(x+1)}\mathrm dx.$$ Integrating this gives, $$I'(a)=\bigg(\frac{1}{a-1}\ln|ax+1|-\frac{1}{a-1}\ln|x+1| \bigg)|_0^{\infty}=\frac{1}{a-1}\ln|\frac{ax+1}{x+1}| |_0^{\infty}.$$ Noticing the lower bound is zero we're left with an easy limit$$I'(a)=\lim_{x\to\infty}\frac{1}{a-1}\ln|\frac{ax+1}{x+1}|=\frac{\ln(a)}{a-1}.$$ Now if we integrate we can get $I(a)$, this is easy because our integral closely resembles the definition of the dilogarithm, so we have $$I(a)=\int \frac{\ln(a)}{a-1} da=-Li_2(1-a)+C.$$ Now if we notice $I(0)=0$ and note that $Li_2(1)=\frac{\pi^2}{6}$ we see that $C=\frac{\pi^2}{6}.$ So finally if we note that for your problem $a=2$ have that, $$I(2)=-Li_2(-1)+\frac{\pi^2}{6}=\frac{\pi^2}{12}+\frac{\pi^2}{6}=\frac{\pi^2}{4}.$$</p> <p>This is admittedly not as nice as the approach using the geometric series and makes heavy use of the dilogarithm and it's particular values but I think it's kinda fun.</p> Answer 22: <p>do the substitution $2x+1\rightarrow x$ and the let $x\rightarrow x^{-1}$ , hence $$\int_{0}^{\infty }\frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\, \mathrm{d}x=2\int_{1}^{\infty }\frac{\ln x}{x^{2}-1}\, \mathrm{d}x=-2\int_{0}^{1}\frac{\ln x}{1-x^{2}}\, \mathrm{d}x$$ then use the geometric series $$\frac{1}{1-x^{2}}=\sum_{n=0}^{\infty }x^{2n}$$ we get $$\int_{0}^{\infty }\frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\, \mathrm{d}x=2\sum_{n=0}^{\infty }\frac{1}{\left ( 2n+1 \right )^{2}}$$ and the answer will follow.</p> Answer 23: <p><em>Too long for a comment.</em></p> <p>Making the problem more general, it is possible to compute $$I=\int \frac{\log (a x+b)}{(x+c) (x+d)}\,\mathrm dx$$ Using one integration by parts $$u=\log (a x+b)\implies u'=\frac{a}{a x+b}\,\mathrm dx$$ $$v'=\frac{\mathrm dx}{(c+x) (d+x)}\implies v=\frac{\log (x+d)-\log (x+c)}{c-d}$$ So, if $c\neq 
d$,$$(c-d)I=\log (a x+b) \log \left(\frac{(x+d) (a c-b)}{(x+c) (a d-b)}\right)-\text{Li}_2\left(\frac{a x+b}{b-a c}\right)+\text{Li}_2\left(\frac{a x+b}{b-a d}\right) $$ and, if $c=d$, $$I=\frac{a (x+c) \log (x+c)-(a x+b) \log (a x+b)}{(x+c) (b-a c)}$$</p> Answer 24: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \int_{0}^{\infty}{\ln\pars{2x + 1} \over x\pars{x + 1}}\,\dd x &amp; \,\,\,\stackrel{2x + 1\ \mapsto\ x}{=}\,\,\, \int_{1}^{\infty}{\ln\pars{x} \over \bracks{\pars{x - 1}/2}\bracks{\pars{x - 1}/2 + 1}}\,{\dd x \over 2} = 2\int_{1}^{\infty}{\ln\pars{x} \over x^{2} - 1}\,\dd x \\[5mm] &amp; \stackrel{x\ \mapsto\ 1/x}{=}\,\,\, -2\int_{0}^{1}{\ln\pars{x} \over 1 - x^{2}}\,\dd x \,\,\,\stackrel{x^{2}\ \mapsto\ x}{=}\,\,\, -\,{1 \over 2}\int_{0}^{1}{x^{-1/2}\,\ln\pars{x} \over 1 - x}\,\dd x \\[5mm] &amp; = \left.{1 \over 2}\,\totald{}{\mu}\int_{0}^{1}{1 - x^{\mu} \over 1 - x}\,\dd x\, \right\vert_{\ \mu\ =\ -1/2} = \left.{1 \over 2}\,\totald{\bracks{\Psi\pars{\mu + 1} + \gamma}}{\mu} \right\vert_{\ \mu\ =\ -1/2}\label{1}\tag{1} \end{align}</p> <blockquote> <p>where $\ds{\Psi}$ is the $Digamma\ Function$ and $\gamma$ is the $Euler\mbox{-}Mascheroni\ Constant$: See $\mathbf{6.3.22}$ in <a href="http://people.math.sfu.ca/~cbm/aands/page_259.htm" rel="nofollow noreferrer">A&amp;S Table</a>.</p> </blockquote> 
<p>From expression&nbsp;\eqref{1}: $$ \int_{0}^{\infty}{\ln\pars{2x + 1} \over x\pars{x + 1}}\,\dd x = {1 \over 2}\,\Psi\, '\pars{1 \over 2} = \bbx{\ds{\pi^{2} \over 4}} $$</p> <blockquote> <p>See $\mathbf{6.4.4}$ in <a href="http://people.math.sfu.ca/~cbm/aands/page_260.htm" rel="nofollow noreferrer">A&amp;S Table</a>.</p> </blockquote> Answer 25: <p>This might be a late response, but consider the double integral</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dy \ dx.$$</p> <p>We will evaluate $I$ in two ways. Integrating with respect to $y$ first, we see that</p> <p>$$\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \ dy= \lim_{b \rightarrow \infty}\frac{\arctan((2b+1)x)}{2x}-\frac{\arctan(x)}{2x}=\frac{\pi}{4x}-\frac{\arctan(x)}{2x}$$ and get</p> <p>$$I=\int_{0}^{\infty} \left(\frac{\pi}{4x}-\frac{\arctan(x)}{2x} \right) \frac{x}{1+x^2} \ dx=\int_{0}^{\infty} \frac{\pi}{4(1+x^2)}-\frac{\arctan(x)}{2(1+x^2)} \ dx=\frac{\pi^2}{8}-\frac{\pi^2}{16}=\frac{\pi^2}{16}.$$ </p> <p>Now we reverse the order of integration as such:</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dx \ dy.$$</p> <p>To clean things up a little bit, let $u=x^2, \ du = 2 x \ dx.$ So </p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+u(2y+1)^2} \frac{u}{2(1+u)} \ du \ dy.$$</p> <p>Integrating with respect to $u,$ we can use partial fractions to get that </p> <p>$$\frac{1}{1+u(2y+1)^2} \frac{1}{2(1+u)}=\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p> <p>We see </p> <p>$$\int_{0}^{\infty} \frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)} \ du $$ $$= \lim_{b \rightarrow \infty}\frac{\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \frac{\ln(b+1)}{2(2y+1)^2-2}= \frac{\ln(2y+1)}{(2y+1)^2-1}=\frac{\ln(2y+1)}{4y(y+1)}.$$</p> <p>Hence, </p> <p>$$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{4y(y+1)} \ dy=\frac{\pi^2}{16},$$</p> <p>So $$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{y(y+1)} \ 
dy=\frac{\pi^2}{4}.$$</p> Answer 26: <p>Alternatively we can use Feynman's Trick if we let, $$I(a)=\int_0^{\infty} \frac{\ln(ax+1)}{x(x+1)}\mathrm dx,$$ then if we take the derivative of $I(a)$ and expand using partial fractions we have $$I'(a)=\int_0^{\infty} \frac{1}{(ax+1)(x+1)}\mathrm dx =\int_0^{\infty} \frac{a}{(a-1)(ax+1)} - \frac{1}{(a-1)(x+1)}\mathrm dx.$$ Integrating this gives, $$I'(a)=\bigg(\frac{1}{a-1}\ln|ax+1|-\frac{1}{a-1}\ln|x+1| \bigg)|_0^{\infty}=\frac{1}{a-1}\ln|\frac{ax+1}{x+1}| |_0^{\infty}.$$ Noticing the lower bound is zero we're left with an easy limit$$I'(a)=\lim_{x\to\infty}\frac{1}{a-1}\ln|\frac{ax+1}{x+1}|=\frac{\ln(a)}{a-1}.$$ Now if we integrate we can get $I(a)$, this is easy because our integral closely resembles the definition of the dilogarithm, so we have $$I(a)=\int \frac{\ln(a)}{a-1} da=-Li_2(1-a)+C.$$ Now if we notice $I(0)=0$ and note that $Li_2(1)=\frac{\pi^2}{6}$ we see that $C=\frac{\pi^2}{6}.$ So finally if we note that for your problem $a=2$ have that, $$I(2)=-Li_2(-1)+\frac{\pi^2}{6}=\frac{\pi^2}{12}+\frac{\pi^2}{6}=\frac{\pi^2}{4}.$$</p> <p>This is admittedly not as nice as the approach using the geometric series and makes heavy use of the dilogarithm and it's particular values but I think it's kinda fun.</p> Answer 27: <p>do the substitution $2x+1\rightarrow x$ and the let $x\rightarrow x^{-1}$ , hence $$\int_{0}^{\infty }\frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\, \mathrm{d}x=2\int_{1}^{\infty }\frac{\ln x}{x^{2}-1}\, \mathrm{d}x=-2\int_{0}^{1}\frac{\ln x}{1-x^{2}}\, \mathrm{d}x$$ then use the geometric series $$\frac{1}{1-x^{2}}=\sum_{n=0}^{\infty }x^{2n}$$ we get $$\int_{0}^{\infty }\frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\, \mathrm{d}x=2\sum_{n=0}^{\infty }\frac{1}{\left ( 2n+1 \right )^{2}}$$ and the answer will follow.</p> Answer 28: <p><em>Too long for a comment.</em></p> <p>Making the problem more general, it is possible to compute $$I=\int \frac{\log (a x+b)}{(x+c) (x+d)}\,\mathrm 
dx$$ Using one integration by parts $$u=\log (a x+b)\implies u'=\frac{a}{a x+b}\,\mathrm dx$$ $$v'=\frac{\mathrm dx}{(c+x) (d+x)}\implies v=\frac{\log (x+d)-\log (x+c)}{c-d}$$ So, if $c\neq d$,$$(c-d)I=\log (a x+b) \log \left(\frac{(x+d) (a c-b)}{(x+c) (a d-b)}\right)-\text{Li}_2\left(\frac{a x+b}{b-a c}\right)+\text{Li}_2\left(\frac{a x+b}{b-a d}\right) $$ and, if $c=d$, $$I=\frac{a (x+c) \log (x+c)-(a x+b) \log (a x+b)}{(x+c) (b-a c)}$$</p> Answer 29: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \int_{0}^{\infty}{\ln\pars{2x + 1} \over x\pars{x + 1}}\,\dd x &amp; \,\,\,\stackrel{2x + 1\ \mapsto\ x}{=}\,\,\, \int_{1}^{\infty}{\ln\pars{x} \over \bracks{\pars{x - 1}/2}\bracks{\pars{x - 1}/2 + 1}}\,{\dd x \over 2} = 2\int_{1}^{\infty}{\ln\pars{x} \over x^{2} - 1}\,\dd x \\[5mm] &amp; \stackrel{x\ \mapsto\ 1/x}{=}\,\,\, -2\int_{0}^{1}{\ln\pars{x} \over 1 - x^{2}}\,\dd x \,\,\,\stackrel{x^{2}\ \mapsto\ x}{=}\,\,\, -\,{1 \over 2}\int_{0}^{1}{x^{-1/2}\,\ln\pars{x} \over 1 - x}\,\dd x \\[5mm] &amp; = \left.{1 \over 2}\,\totald{}{\mu}\int_{0}^{1}{1 - x^{\mu} \over 1 - x}\,\dd x\, \right\vert_{\ \mu\ =\ -1/2} = \left.{1 \over 2}\,\totald{\bracks{\Psi\pars{\mu + 1} + \gamma}}{\mu} \right\vert_{\ \mu\ =\ -1/2}\label{1}\tag{1} \end{align}</p> <blockquote> <p>where $\ds{\Psi}$ is the $Digamma\ Function$ and $\gamma$ is 
the $Euler\mbox{-}Mascheroni\ Constant$: See $\mathbf{6.3.22}$ in <a href="http://people.math.sfu.ca/~cbm/aands/page_259.htm" rel="nofollow noreferrer">A&amp;S Table</a>.</p> </blockquote> <p>From expression&nbsp;\eqref{1}: $$ \int_{0}^{\infty}{\ln\pars{2x + 1} \over x\pars{x + 1}}\,\dd x = {1 \over 2}\,\Psi\, '\pars{1 \over 2} = \bbx{\ds{\pi^{2} \over 4}} $$</p> <blockquote> <p>See $\mathbf{6.4.4}$ in <a href="http://people.math.sfu.ca/~cbm/aands/page_260.htm" rel="nofollow noreferrer">A&amp;S Table</a>.</p> </blockquote> Answer 30: <p>This might be a late response, but consider the double integral</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dy \ dx.$$</p> <p>We will evaluate $I$ in two ways. Integrating with respect to $y$ first, we see that</p> <p>$$\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \ dy= \lim_{b \rightarrow \infty}\frac{\arctan((2b+1)x)}{2x}-\frac{\arctan(x)}{2x}=\frac{\pi}{4x}-\frac{\arctan(x)}{2x}$$ and get</p> <p>$$I=\int_{0}^{\infty} \left(\frac{\pi}{4x}-\frac{\arctan(x)}{2x} \right) \frac{x}{1+x^2} \ dx=\int_{0}^{\infty} \frac{\pi}{4(1+x^2)}-\frac{\arctan(x)}{2(1+x^2)} \ dx=\frac{\pi^2}{8}-\frac{\pi^2}{16}=\frac{\pi^2}{16}.$$ </p> <p>Now we reverse the order of integration as such:</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dx \ dy.$$</p> <p>To clean things up a little bit, let $u=x^2, \ du = 2 x \ dx.$ So </p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+u(2y+1)^2} \frac{u}{2(1+u)} \ du \ dy.$$</p> <p>Integrating with respect to $u,$ we can use partial fractions to get that </p> <p>$$\frac{1}{1+u(2y+1)^2} \frac{1}{2(1+u)}=\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p> <p>We see </p> <p>$$\int_{0}^{\infty} \frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)} \ du $$ $$= \lim_{b \rightarrow \infty}\frac{\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \frac{\ln(b+1)}{2(2y+1)^2-2}= 
\frac{\ln(2y+1)}{(2y+1)^2-1}=\frac{\ln(2y+1)}{4y(y+1)}.$$</p> <p>Hence, </p> <p>$$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{4y(y+1)} \ dy=\frac{\pi^2}{16},$$</p> <p>So $$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{y(y+1)} \ dy=\frac{\pi^2}{4}.$$</p> Answer 31: <p>Alternatively we can use Feynman's Trick if we let, $$I(a)=\int_0^{\infty} \frac{\ln(ax+1)}{x(x+1)}\mathrm dx,$$ then if we take the derivative of $I(a)$ and expand using partial fractions we have $$I'(a)=\int_0^{\infty} \frac{1}{(ax+1)(x+1)}\mathrm dx =\int_0^{\infty} \frac{a}{(a-1)(ax+1)} - \frac{1}{(a-1)(x+1)}\mathrm dx.$$ Integrating this gives, $$I'(a)=\bigg(\frac{1}{a-1}\ln|ax+1|-\frac{1}{a-1}\ln|x+1| \bigg)|_0^{\infty}=\frac{1}{a-1}\ln|\frac{ax+1}{x+1}| |_0^{\infty}.$$ Noticing the lower bound is zero we're left with an easy limit$$I'(a)=\lim_{x\to\infty}\frac{1}{a-1}\ln|\frac{ax+1}{x+1}|=\frac{\ln(a)}{a-1}.$$ Now if we integrate we can get $I(a)$, this is easy because our integral closely resembles the definition of the dilogarithm, so we have $$I(a)=\int \frac{\ln(a)}{a-1} da=-Li_2(1-a)+C.$$ Now if we notice $I(0)=0$ and note that $Li_2(1)=\frac{\pi^2}{6}$ we see that $C=\frac{\pi^2}{6}.$ So finally if we note that for your problem $a=2$ have that, $$I(2)=-Li_2(-1)+\frac{\pi^2}{6}=\frac{\pi^2}{12}+\frac{\pi^2}{6}=\frac{\pi^2}{4}.$$</p> <p>This is admittedly not as nice as the approach using the geometric series and makes heavy use of the dilogarithm and it's particular values but I think it's kinda fun.</p> Answer 32: <p>do the substitution $2x+1\rightarrow x$ and the let $x\rightarrow x^{-1}$ , hence $$\int_{0}^{\infty }\frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\, \mathrm{d}x=2\int_{1}^{\infty }\frac{\ln x}{x^{2}-1}\, \mathrm{d}x=-2\int_{0}^{1}\frac{\ln x}{1-x^{2}}\, \mathrm{d}x$$ then use the geometric series $$\frac{1}{1-x^{2}}=\sum_{n=0}^{\infty }x^{2n}$$ we get $$\int_{0}^{\infty }\frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\, \mathrm{d}x=2\sum_{n=0}^{\infty }\frac{1}{\left ( 2n+1 
\right )^{2}}$$ and the answer will follow.</p> Answer 33: <p><em>Too long for a comment.</em></p> <p>Making the problem more general, it is possible to compute $$I=\int \frac{\log (a x+b)}{(x+c) (x+d)}\,\mathrm dx$$ Using one integration by parts $$u=\log (a x+b)\implies u'=\frac{a}{a x+b}\,\mathrm dx$$ $$v'=\frac{\mathrm dx}{(c+x) (d+x)}\implies v=\frac{\log (x+d)-\log (x+c)}{c-d}$$ So, if $c\neq d$,$$(c-d)I=\log (a x+b) \log \left(\frac{(x+d) (a c-b)}{(x+c) (a d-b)}\right)-\text{Li}_2\left(\frac{a x+b}{b-a c}\right)+\text{Li}_2\left(\frac{a x+b}{b-a d}\right) $$ and, if $c=d$, $$I=\frac{a (x+c) \log (x+c)-(a x+b) \log (a x+b)}{(x+c) (b-a c)}$$</p> Answer 34: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \int_{0}^{\infty}{\ln\pars{2x + 1} \over x\pars{x + 1}}\,\dd x &amp; \,\,\,\stackrel{2x + 1\ \mapsto\ x}{=}\,\,\, \int_{1}^{\infty}{\ln\pars{x} \over \bracks{\pars{x - 1}/2}\bracks{\pars{x - 1}/2 + 1}}\,{\dd x \over 2} = 2\int_{1}^{\infty}{\ln\pars{x} \over x^{2} - 1}\,\dd x \\[5mm] &amp; \stackrel{x\ \mapsto\ 1/x}{=}\,\,\, -2\int_{0}^{1}{\ln\pars{x} \over 1 - x^{2}}\,\dd x \,\,\,\stackrel{x^{2}\ \mapsto\ x}{=}\,\,\, -\,{1 \over 2}\int_{0}^{1}{x^{-1/2}\,\ln\pars{x} \over 1 - x}\,\dd x \\[5mm] &amp; = \left.{1 \over 2}\,\totald{}{\mu}\int_{0}^{1}{1 - x^{\mu} \over 1 - x}\,\dd x\, \right\vert_{\ \mu\ =\ 
-1/2} = \left.{1 \over 2}\,\totald{\bracks{\Psi\pars{\mu + 1} + \gamma}}{\mu} \right\vert_{\ \mu\ =\ -1/2}\label{1}\tag{1} \end{align}</p> <blockquote> <p>where $\ds{\Psi}$ is the $Digamma\ Function$ and $\gamma$ is the $Euler\mbox{-}Mascheroni\ Constant$: See $\mathbf{6.3.22}$ in <a href="http://people.math.sfu.ca/~cbm/aands/page_259.htm" rel="nofollow noreferrer">A&amp;S Table</a>.</p> </blockquote> <p>From expression&nbsp;\eqref{1}: $$ \int_{0}^{\infty}{\ln\pars{2x + 1} \over x\pars{x + 1}}\,\dd x = {1 \over 2}\,\Psi\, '\pars{1 \over 2} = \bbx{\ds{\pi^{2} \over 4}} $$</p> <blockquote> <p>See $\mathbf{6.4.4}$ in <a href="http://people.math.sfu.ca/~cbm/aands/page_260.htm" rel="nofollow noreferrer">A&amp;S Table</a>.</p> </blockquote> Answer 35: <p>This might be a late response, but consider the double integral</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dy \ dx.$$</p> <p>We will evaluate $I$ in two ways. Integrating with respect to $y$ first, we see that</p> <p>$$\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \ dy= \lim_{b \rightarrow \infty}\frac{\arctan((2b+1)x)}{2x}-\frac{\arctan(x)}{2x}=\frac{\pi}{4x}-\frac{\arctan(x)}{2x}$$ and get</p> <p>$$I=\int_{0}^{\infty} \left(\frac{\pi}{4x}-\frac{\arctan(x)}{2x} \right) \frac{x}{1+x^2} \ dx=\int_{0}^{\infty} \frac{\pi}{4(1+x^2)}-\frac{\arctan(x)}{2(1+x^2)} \ dx=\frac{\pi^2}{8}-\frac{\pi^2}{16}=\frac{\pi^2}{16}.$$ </p> <p>Now we reverse the order of integration as such:</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dx \ dy.$$</p> <p>To clean things up a little bit, let $u=x^2, \ du = 2 x \ dx.$ So </p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+u(2y+1)^2} \frac{u}{2(1+u)} \ du \ dy.$$</p> <p>Integrating with respect to $u,$ we can use partial fractions to get that </p> <p>$$\frac{1}{1+u(2y+1)^2} \frac{1}{2(1+u)}=\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p> <p>We see </p> <p>$$\int_{0}^{\infty} 
\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)} \ du $$ $$= \lim_{b \rightarrow \infty}\frac{\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \frac{\ln(b+1)}{2(2y+1)^2-2}= \frac{\ln(2y+1)}{(2y+1)^2-1}=\frac{\ln(2y+1)}{4y(y+1)}.$$</p> <p>Hence, </p> <p>$$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{4y(y+1)} \ dy=\frac{\pi^2}{16},$$</p> <p>So $$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{y(y+1)} \ dy=\frac{\pi^2}{4}.$$</p> Answer 36: <p>Alternatively we can use Feynman's Trick if we let, $$I(a)=\int_0^{\infty} \frac{\ln(ax+1)}{x(x+1)}\mathrm dx,$$ then if we take the derivative of $I(a)$ and expand using partial fractions we have $$I'(a)=\int_0^{\infty} \frac{1}{(ax+1)(x+1)}\mathrm dx =\int_0^{\infty} \frac{a}{(a-1)(ax+1)} - \frac{1}{(a-1)(x+1)}\mathrm dx.$$ Integrating this gives, $$I'(a)=\bigg(\frac{1}{a-1}\ln|ax+1|-\frac{1}{a-1}\ln|x+1| \bigg)|_0^{\infty}=\frac{1}{a-1}\ln|\frac{ax+1}{x+1}| |_0^{\infty}.$$ Noticing the lower bound is zero we're left with an easy limit$$I'(a)=\lim_{x\to\infty}\frac{1}{a-1}\ln|\frac{ax+1}{x+1}|=\frac{\ln(a)}{a-1}.$$ Now if we integrate we can get $I(a)$, this is easy because our integral closely resembles the definition of the dilogarithm, so we have $$I(a)=\int \frac{\ln(a)}{a-1} da=-Li_2(1-a)+C.$$ Now if we notice $I(0)=0$ and note that $Li_2(1)=\frac{\pi^2}{6}$ we see that $C=\frac{\pi^2}{6}.$ So finally if we note that for your problem $a=2$ have that, $$I(2)=-Li_2(-1)+\frac{\pi^2}{6}=\frac{\pi^2}{12}+\frac{\pi^2}{6}=\frac{\pi^2}{4}.$$</p> <p>This is admittedly not as nice as the approach using the geometric series and makes heavy use of the dilogarithm and it's particular values but I think it's kinda fun.</p> Answer 37: <p>do the substitution $2x+1\rightarrow x$ and the let $x\rightarrow x^{-1}$ , hence $$\int_{0}^{\infty }\frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\, \mathrm{d}x=2\int_{1}^{\infty }\frac{\ln x}{x^{2}-1}\, \mathrm{d}x=-2\int_{0}^{1}\frac{\ln x}{1-x^{2}}\, \mathrm{d}x$$ then use the geometric series 
$$\frac{1}{1-x^{2}}=\sum_{n=0}^{\infty }x^{2n}$$ we get $$\int_{0}^{\infty }\frac{\ln\left ( 2x+1 \right )}{x\left ( x+1 \right )}\, \mathrm{d}x=2\sum_{n=0}^{\infty }\frac{1}{\left ( 2n+1 \right )^{2}}$$ and the answer will follow.</p> Answer 38: <p><em>Too long for a comment.</em></p> <p>Making the problem more general, it is possible to compute $$I=\int \frac{\log (a x+b)}{(x+c) (x+d)}\,\mathrm dx$$ Using one integration by parts $$u=\log (a x+b)\implies u'=\frac{a}{a x+b}\,\mathrm dx$$ $$v'=\frac{\mathrm dx}{(c+x) (d+x)}\implies v=\frac{\log (x+d)-\log (x+c)}{c-d}$$ So, if $c\neq d$,$$(c-d)I=\log (a x+b) \log \left(\frac{(x+d) (a c-b)}{(x+c) (a d-b)}\right)-\text{Li}_2\left(\frac{a x+b}{b-a c}\right)+\text{Li}_2\left(\frac{a x+b}{b-a d}\right) $$ and, if $c=d$, $$I=\frac{a (x+c) \log (x+c)-(a x+b) \log (a x+b)}{(x+c) (b-a c)}$$</p> Answer 39: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \int_{0}^{\infty}{\ln\pars{2x + 1} \over x\pars{x + 1}}\,\dd x &amp; \,\,\,\stackrel{2x + 1\ \mapsto\ x}{=}\,\,\, \int_{1}^{\infty}{\ln\pars{x} \over \bracks{\pars{x - 1}/2}\bracks{\pars{x - 1}/2 + 1}}\,{\dd x \over 2} = 2\int_{1}^{\infty}{\ln\pars{x} \over x^{2} - 1}\,\dd x \\[5mm] &amp; \stackrel{x\ \mapsto\ 1/x}{=}\,\,\, -2\int_{0}^{1}{\ln\pars{x} \over 1 - x^{2}}\,\dd x \,\,\,\stackrel{x^{2}\ \mapsto\ x}{=}\,\,\, 
-\,{1 \over 2}\int_{0}^{1}{x^{-1/2}\,\ln\pars{x} \over 1 - x}\,\dd x \\[5mm] &amp; = \left.{1 \over 2}\,\totald{}{\mu}\int_{0}^{1}{1 - x^{\mu} \over 1 - x}\,\dd x\, \right\vert_{\ \mu\ =\ -1/2} = \left.{1 \over 2}\,\totald{\bracks{\Psi\pars{\mu + 1} + \gamma}}{\mu} \right\vert_{\ \mu\ =\ -1/2}\label{1}\tag{1} \end{align}</p> <blockquote> <p>where $\ds{\Psi}$ is the $Digamma\ Function$ and $\gamma$ is the $Euler\mbox{-}Mascheroni\ Constant$: See $\mathbf{6.3.22}$ in <a href="http://people.math.sfu.ca/~cbm/aands/page_259.htm" rel="nofollow noreferrer">A&amp;S Table</a>.</p> </blockquote> <p>From expression&nbsp;\eqref{1}: $$ \int_{0}^{\infty}{\ln\pars{2x + 1} \over x\pars{x + 1}}\,\dd x = {1 \over 2}\,\Psi\, '\pars{1 \over 2} = \bbx{\ds{\pi^{2} \over 4}} $$</p> <blockquote> <p>See $\mathbf{6.4.4}$ in <a href="http://people.math.sfu.ca/~cbm/aands/page_260.htm" rel="nofollow noreferrer">A&amp;S Table</a>.</p> </blockquote> Answer 40: <p>This might be a late response, but consider the double integral</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dy \ dx.$$</p> <p>We will evaluate $I$ in two ways. 
Integrating with respect to $y$ first, we see that</p> <p>$$\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \ dy= \lim_{b \rightarrow \infty}\frac{\arctan((2b+1)x)}{2x}-\frac{\arctan(x)}{2x}=\frac{\pi}{4x}-\frac{\arctan(x)}{2x}$$ and get</p> <p>$$I=\int_{0}^{\infty} \left(\frac{\pi}{4x}-\frac{\arctan(x)}{2x} \right) \frac{x}{1+x^2} \ dx=\int_{0}^{\infty} \frac{\pi}{4(1+x^2)}-\frac{\arctan(x)}{2(1+x^2)} \ dx=\frac{\pi^2}{8}-\frac{\pi^2}{16}=\frac{\pi^2}{16}.$$ </p> <p>Now we reverse the order of integration as such:</p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+x^2(2y+1)^2} \frac{x}{1+x^2} \ dx \ dy.$$</p> <p>To clean things up a little bit, let $u=x^2, \ du = 2 x \ dx.$ So </p> <p>$$I=\int_{0}^{\infty}\int_{0}^{\infty} \frac{1}{1+u(2y+1)^2} \frac{u}{2(1+u)} \ du \ dy.$$</p> <p>Integrating with respect to $u,$ we can use partial fractions to get that </p> <p>$$\frac{1}{1+u(2y+1)^2} \frac{1}{2(1+u)}=\frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)}.$$</p> <p>We see </p> <p>$$\int_{0}^{\infty} \frac{(2y+1)^2}{2(1+u(2y+1)^2)((2y+1)^2-1)}-\frac{1}{2(u+1)((2y+1)^2-1)} \ du $$ $$= \lim_{b \rightarrow \infty}\frac{\ln(b(2y+1)^2+1)}{2(2y+1)^2-2} - \frac{\ln(b+1)}{2(2y+1)^2-2}= \frac{\ln(2y+1)}{(2y+1)^2-1}=\frac{\ln(2y+1)}{4y(y+1)}.$$</p> <p>Hence, </p> <p>$$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{4y(y+1)} \ dy=\frac{\pi^2}{16},$$</p> <p>So $$I=\int_{0}^{\infty}\frac{\ln(2y+1)}{y(y+1)} \ dy=\frac{\pi^2}{4}.$$</p>
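The answers above all land on the same value, and the geometric-series answer reduces the integral to $2\sum_{n\ge 0}\frac{1}{(2n+1)^2}$. As a quick numerical sanity check (my own sketch, not part of the original thread), a partial sum of that series can be compared against $\pi^2/4$:

```python
import math

# Partial sum of 2 * sum_{n >= 0} 1/(2n+1)^2, the series the geometric-series
# answer reduces the integral to. The tail beyond n = N is roughly 1/(2N),
# so N = 10**6 gives agreement to about 6 decimal places.
N = 1_000_000
s = 2 * sum(1.0 / (2 * n + 1) ** 2 for n in range(N))

print(s)                # approximately 2.4674
print(math.pi ** 2 / 4) # approximately 2.4674
```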
Is $f(x)=\mu(V+x)$ continuous?
2,090,031
6
https://math.stackexchange.com/questions/2090031/is-fx-muvx-continuous
<blockquote> <p>Suppose $V$ is open in $R^k$ and $\mu$ is a finite positive Borel measure on $R^k$. Is the function $f(x)=\mu(V+x)$ continuous? lower semicontinuous? upper semicontinuous?</p> </blockquote> <p>Clearly $f$ cannot be guaranteed to be continuous or even upper semicontinuous. If we let $$\mu_{x_0}(E)=\begin{cases} 1 &amp; x_0\in E \\ 0 &amp; x_0\not\in E \end{cases}$$ and $V= B_r(x_0)$, we then find that it is strictly lower semicontinuous. </p> <p>But I am finding it hard to:</p> <ul> <li>Find an example where $f$ isn't lower semicontinuous.</li> <li>Find a proof that $f$ must be lower semicontinuous.</li> </ul> <p>Any ideas as to how?</p>
[ { "answer_id": 2090057, "score": 5, "text": "<p>Lower-semicontinuity of $f$ can be thought as a consequence of the portmanteau theorem. Here is a self-contained proof:</p>\n\n<p>Since $V$ is open, there exists a sequence $(\\varphi_n)$ of compactly supported continuous functions on $\\Bbb{R}^k$ such that $0 \\leq \\varphi_1 \\leq \\varphi_2 \\leq \\varphi_3 \\leq \\cdots$ and $\\varphi_n \\uparrow \\mathbf{1}_V$ pointwise. From this, for any $x \\in \\Bbb{R}^k$ and for any $n = 1, 2, \\cdots$ we have</p>\n\n<p>$$ f(x)\n= \\mu(x + V)\n= \\int_{\\Bbb{R}^k} \\mathbf{1}_V (t-x) \\, \\mu(dt)\n\\geq \\int_{\\Bbb{R}^k} \\varphi_n(t-x) \\, \\mu(dt). $$</p>\n\n<p>Now for each fixed $n$, we find that $\\varphi_n(\\cdot - x)$ converges pointwise to $\\varphi_n(\\cdot - x_0)$ as $x \\to x_0$. Thus by the bounded convergence theorem,</p>\n\n<p>$$ \\liminf_{x\\to x_0} f(x)\n\\geq \\liminf_{x\\to x_0} \\int_{\\Bbb{R}^k} \\varphi_n(t-x) \\, \\mu(dt)\n= \\int_{\\Bbb{R}^k} \\varphi_n(t-x_0) \\, \\mu(dt). $$</p>\n\n<p>Taking $n \\to \\infty$ together with the monotone convergence theorem shows that</p>\n\n<p>$$ \\liminf_{x\\to x_0} f(x) \\geq f(x_0), $$</p>\n\n<p>which implies lower-semicontinuity of $f$.</p>\n" }, { "answer_id": 2090706, "score": 1, "text": "<p>It can be shown that $x\\mapsto\\mu(x+V)$ is continuous for all open $V$ if and only if $\\mu$ is absolutely continuous with respect to Lebesgue measure. See <a href=\"http://www.jstor.org/stable/pdf/2244564.pdf\" rel=\"nofollow noreferrer\"> A note on translation continuity of probability measures</a> by S. Zabell.</p>\n" }
$$</p>\n\n<p>Taking $n \\to \\infty$ together with the monotone convergence theorem shows that</p>\n\n<p>$$ \\liminf_{x\\to x_0} f(x) \\geq f(x_0), $$</p>\n\n<p>which implies lower-semicontinuity of $f$.</p>\n" }, { "answer_id": 2090706, "score": 1, "text": "<p>It can be shown that $x\\mapsto\\mu(x+V)$ is continuous for all open $V$ if and only if $\\mu$ is absolutely continuous with respect to Lebesgue measure. See <a href=\"http://www.jstor.org/stable/pdf/2244564.pdf\" rel=\"nofollow noreferrer\"> A note on translation continuity of probability measures</a> by S. Zabell.</p>\n" }, { "answer_id": 2090057, "score": 5, "text": "<p>Lower-semicontinuity of $f$ can be thought as a consequence of the portmanteau theorem. Here is a self-contained proof:</p>\n\n<p>Since $V$ is open, there exists a sequence $(\\varphi_n)$ of compactly supported continuous functions on $\\Bbb{R}^k$ such that $0 \\leq \\varphi_1 \\leq \\varphi_2 \\leq \\varphi_3 \\leq \\cdots$ and $\\varphi_n \\uparrow \\mathbf{1}_V$ pointwise. From this, for any $x \\in \\Bbb{R}^k$ and for any $n = 1, 2, \\cdots$ we have</p>\n\n<p>$$ f(x)\n= \\mu(x + V)\n= \\int_{\\Bbb{R}^k} \\mathbf{1}_V (t-x) \\, \\mu(dt)\n\\geq \\int_{\\Bbb{R}^k} \\varphi_n(t-x) \\, \\mu(dt). $$</p>\n\n<p>Now for each fixed $n$, we find that $\\varphi_n(\\cdot - x)$ converges pointwise to $\\varphi_n(\\cdot - x_0)$ as $x \\to x_0$. Thus by the bounded convergence theorem,</p>\n\n<p>$$ \\liminf_{x\\to x_0} f(x)\n\\geq \\liminf_{x\\to x_0} \\int_{\\Bbb{R}^k} \\varphi_n(t-x) \\, \\mu(dt)\n= \\int_{\\Bbb{R}^k} \\varphi_n(t-x_0) \\, \\mu(dt). 
$$</p>\n\n<p>Taking $n \\to \\infty$ together with the monotone convergence theorem shows that</p>\n\n<p>$$ \\liminf_{x\\to x_0} f(x) \\geq f(x_0), $$</p>\n\n<p>which implies lower-semicontinuity of $f$.</p>\n" }, { "answer_id": 2090706, "score": 1, "text": "<p>It can be shown that $x\\mapsto\\mu(x+V)$ is continuous for all open $V$ if and only if $\\mu$ is absolutely continuous with respect to Lebesgue measure. See <a href=\"http://www.jstor.org/stable/pdf/2244564.pdf\" rel=\"nofollow noreferrer\"> A note on translation continuity of probability measures</a> by S. Zabell.</p>\n" }, { "answer_id": 2090057, "score": 5, "text": "<p>Lower-semicontinuity of $f$ can be thought as a consequence of the portmanteau theorem. Here is a self-contained proof:</p>\n\n<p>Since $V$ is open, there exists a sequence $(\\varphi_n)$ of compactly supported continuous functions on $\\Bbb{R}^k$ such that $0 \\leq \\varphi_1 \\leq \\varphi_2 \\leq \\varphi_3 \\leq \\cdots$ and $\\varphi_n \\uparrow \\mathbf{1}_V$ pointwise. From this, for any $x \\in \\Bbb{R}^k$ and for any $n = 1, 2, \\cdots$ we have</p>\n\n<p>$$ f(x)\n= \\mu(x + V)\n= \\int_{\\Bbb{R}^k} \\mathbf{1}_V (t-x) \\, \\mu(dt)\n\\geq \\int_{\\Bbb{R}^k} \\varphi_n(t-x) \\, \\mu(dt). $$</p>\n\n<p>Now for each fixed $n$, we find that $\\varphi_n(\\cdot - x)$ converges pointwise to $\\varphi_n(\\cdot - x_0)$ as $x \\to x_0$. Thus by the bounded convergence theorem,</p>\n\n<p>$$ \\liminf_{x\\to x_0} f(x)\n\\geq \\liminf_{x\\to x_0} \\int_{\\Bbb{R}^k} \\varphi_n(t-x) \\, \\mu(dt)\n= \\int_{\\Bbb{R}^k} \\varphi_n(t-x_0) \\, \\mu(dt). 
$$</p>\n\n<p>Taking $n \\to \\infty$ together with the monotone convergence theorem shows that</p>\n\n<p>$$ \\liminf_{x\\to x_0} f(x) \\geq f(x_0), $$</p>\n\n<p>which implies lower-semicontinuity of $f$.</p>\n" }, { "answer_id": 2090706, "score": 1, "text": "<p>It can be shown that $x\\mapsto\\mu(x+V)$ is continuous for all open $V$ if and only if $\\mu$ is absolutely continuous with respect to Lebesgue measure. See <a href=\"http://www.jstor.org/stable/pdf/2244564.pdf\" rel=\"nofollow noreferrer\"> A note on translation continuity of probability measures</a> by S. Zabell.</p>\n" } ]
Question: Is $f(x)=\mu(V+x)$ continuous? Answer 1: <p>Lower-semicontinuity of $f$ can be thought as a consequence of the portmanteau theorem. Here is a self-contained proof:</p> <p>Since $V$ is open, there exists a sequence $(\varphi_n)$ of compactly supported continuous functions on $\Bbb{R}^k$ such that $0 \leq \varphi_1 \leq \varphi_2 \leq \varphi_3 \leq \cdots$ and $\varphi_n \uparrow \mathbf{1}_V$ pointwise. From this, for any $x \in \Bbb{R}^k$ and for any $n = 1, 2, \cdots$ we have</p> <p>$$ f(x) = \mu(x + V) = \int_{\Bbb{R}^k} \mathbf{1}_V (t-x) \, \mu(dt) \geq \int_{\Bbb{R}^k} \varphi_n(t-x) \, \mu(dt). $$</p> <p>Now for each fixed $n$, we find that $\varphi_n(\cdot - x)$ converges pointwise to $\varphi_n(\cdot - x_0)$ as $x \to x_0$. Thus by the bounded convergence theorem,</p> <p>$$ \liminf_{x\to x_0} f(x) \geq \liminf_{x\to x_0} \int_{\Bbb{R}^k} \varphi_n(t-x) \, \mu(dt) = \int_{\Bbb{R}^k} \varphi_n(t-x_0) \, \mu(dt). $$</p> <p>Taking $n \to \infty$ together with the monotone convergence theorem shows that</p> <p>$$ \liminf_{x\to x_0} f(x) \geq f(x_0), $$</p> <p>which implies lower-semicontinuity of $f$.</p> Answer 2: <p>It can be shown that $x\mapsto\mu(x+V)$ is continuous for all open $V$ if and only if $\mu$ is absolutely continuous with respect to Lebesgue measure. See <a href="http://www.jstor.org/stable/pdf/2244564.pdf" rel="nofollow noreferrer"> A note on translation continuity of probability measures</a> by S. Zabell.</p>
How is $\sqrt{\frac{\sqrt{2}-1}{\sqrt{2}+1}} = \sqrt{2}-1$?
2090821
6
https://math.stackexchange.com/questions/2090821/how-is-sqrt-frac-sqrt2-1-sqrt21-sqrt2-1
<p>I wondered why this: $$\sqrt{\frac{\sqrt{2}-1}{\sqrt{2}+1}}$$ is equal to $\sqrt{2}-1$.<br> Can anyone explain to me why these are equal? :/</p>
[ { "answer_id": 2090828, "score": 10, "text": "<p>\\begin{align}\n\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}+1}} &amp;= \\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}+1}}\\cdot\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}-1}}\\\\\n&amp;=\\sqrt{\\frac{\\left(\\sqrt{2}-1\\right)^2}{2-1}}\\\\\n&amp;=\\sqrt{\\left(\\sqrt{2}-1\\right)^2}\\\\\\\\\n&amp;=\\sqrt{2}-1\n\\end{align}</p>\n" }, { "answer_id": 2090834, "score": 6, "text": "<p>For $a,b&gt;0$, if $ab=1$, then\n$$\n\\frac{a}{b}=a^2\n$$\nand thus\n$$\n\\sqrt{\\frac{a}{b}}=a.\n$$</p>\n\n<p>Now, consider $a=\\sqrt{2}-1$ and $b=\\sqrt{2}+1$.</p>\n" }, { "answer_id": 2090897, "score": 1, "text": "<p>Since I've expanded on my initial <a href=\"https://math.stackexchange.com/questions/2090821/how-is-sqrt-frac-sqrt2-1-sqrt21-sqrt2-1?noredirect=1#comment4297405_2090821\">\"joke comment\"</a>, I might as well make it a full joke answer :) By that I mean, nobody in their right mind would take this approach to <em>actually</em> verify that the two quantities are equal: instead, what follows is a good, but limited, way to produce expressions with radicals that look different, but are really not.</p>\n\n<p>The trigonometric function $\\tan$ (tangent) has a wide variety of <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_formula\" rel=\"nofollow noreferrer\">half-angle formulas</a>. I would like to use the following two:</p>\n\n<p>\\begin{align*}\n\\tan \\frac\\theta 2 &amp;= \\frac{1 - \\sin \\theta}{\\cos \\theta} \\tag{1}\\\\[10pt]\n\\tan \\frac\\theta 2 &amp;= \\sqrt{\\frac{1 - \\cos \\theta}{1 + \\cos \\theta}} \\tag{2}\n\\end{align*} </p>\n\n<p>Evaluating the first at $\\theta = \\frac{\\pi}{4}$, we have that</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} &amp;= \\frac{1 - \\sin(\\pi/4)}{\\cos(\\pi/4)} \\\\[7pt]\n &amp;= \\frac{1 - \\frac{1}{\\sqrt{2}}}{\\frac{1}{\\sqrt{2}}} \\\\[7pt]\n&amp;= \\left(1 - \\frac{1}{\\sqrt{2}}\\right) \\cdot \\frac{\\sqrt{2}}{1} \\\\[5pt]\n&amp;= \\sqrt{2} - 1\n\\end{align*}</p>\n\n<p>Now, using the second identity with $\\theta = \\pi/4$, we have</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} \n&amp;= \\sqrt{\\frac{1 - \\cos(\\pi/4)}{1 + \\cos(\\pi/4)}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}} \\cdot \\left(\\frac{\\sqrt 2}{\\sqrt 2}\\right)} \\\\[7pt]\n&amp;= \\sqrt{\\frac{\\sqrt{2} - 1}{\\sqrt{2} + 1}}\n\\end{align*}</p>\n" } ]
comment\"</a>, I might as well make it a full joke answer :) By that I mean, nobody in their right mind would take this approach to <em>actually</em> verify that the two quantities are equal: instead, what follows is a good, but limited, way to produce expressions with radicals that look different, but are really not.</p>\n\n<p>The trigonometric function $\\tan$ (tangent) has a wide variety of <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_formula\" rel=\"nofollow noreferrer\">half-angle formulas</a>. I would like to use the following two:</p>\n\n<p>\\begin{align*}\n\\tan \\frac\\theta 2 &amp;= \\frac{1 - \\sin \\theta}{\\cos \\theta} \\tag{1}\\\\[10pt]\n\\tan \\frac\\theta 2 &amp;= \\sqrt{\\frac{1 - \\cos \\theta}{1 + \\cos \\theta}} \\tag{2}\n\\end{align*} </p>\n\n<p>Evaluating the first at $\\theta = \\frac{\\pi}{4}$, we have that</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} &amp;= \\frac{1 - \\sin(\\pi/4)}{\\cos(\\pi/4)} \\\\[7pt]\n &amp;= \\frac{1 - \\frac{1}{\\sqrt{2}}}{\\frac{1}{\\sqrt{2}}} \\\\[7pt]\n&amp;= \\left(1 - \\frac{1}{\\sqrt{2}}\\right) \\cdot \\frac{\\sqrt{2}}{1} \\\\[5pt]\n&amp;= \\sqrt{2} - 1\n\\end{align*}</p>\n\n<p>Now, using the second identity with $\\theta = \\pi/4$, we have</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} \n&amp;= \\sqrt{\\frac{1 - \\cos(\\pi/4)}{1 + \\cos(\\pi/4)}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}} \\cdot \\left(\\frac{\\sqrt 2}{\\sqrt 2}\\right)} \\\\[7pt]\n&amp;= \\sqrt{\\frac{\\sqrt{2} - 1}{\\sqrt{2} + 1}}\n\\end{align*}</p>\n" }, { "answer_id": 2090828, "score": 10, "text": "<p>\\begin{align}\n\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}+1}} &amp;= 
\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}+1}}\\cdot\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}-1}}\\\\\n&amp;=\\sqrt{\\frac{\\left(\\sqrt{2}-1\\right)^2}{2-1}}\\\\\n&amp;=\\sqrt{\\left(\\sqrt{2}-1\\right)^2}\\\\\\\\\n&amp;=\\sqrt{2}-1\n\\end{align}</p>\n" }, { "answer_id": 2090834, "score": 6, "text": "<p>For $a,b&gt;0$, if $ab=1$, then\n$$\n\\frac{a}{b}=a^2\n$$\nand thus\n$$\n\\sqrt{\\frac{a}{b}}=a.\n$$</p>\n\n<p>Now, consider $a=\\sqrt{2}-1$ and $b=\\sqrt{2}+1$.</p>\n" }, { "answer_id": 2090897, "score": 1, "text": "<p>Since I've expanded on my initial <a href=\"https://math.stackexchange.com/questions/2090821/how-is-sqrt-frac-sqrt2-1-sqrt21-sqrt2-1?noredirect=1#comment4297405_2090821\">\"joke comment\"</a>, I might as well make it a full joke answer :) By that I mean, nobody in their right mind would take this approach to <em>actually</em> verify that the two quantities are equal: instead, what follows is a good, but limited, way to produce expressions with radicals that look different, but are really not.</p>\n\n<p>The trigonometric function $\\tan$ (tangent) has a wide variety of <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_formula\" rel=\"nofollow noreferrer\">half-angle formulas</a>. 
I would like to use the following two:</p>\n\n<p>\\begin{align*}\n\\tan \\frac\\theta 2 &amp;= \\frac{1 - \\sin \\theta}{\\cos \\theta} \\tag{1}\\\\[10pt]\n\\tan \\frac\\theta 2 &amp;= \\sqrt{\\frac{1 - \\cos \\theta}{1 + \\cos \\theta}} \\tag{2}\n\\end{align*} </p>\n\n<p>Evaluating the first at $\\theta = \\frac{\\pi}{4}$, we have that</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} &amp;= \\frac{1 - \\sin(\\pi/4)}{\\cos(\\pi/4)} \\\\[7pt]\n &amp;= \\frac{1 - \\frac{1}{\\sqrt{2}}}{\\frac{1}{\\sqrt{2}}} \\\\[7pt]\n&amp;= \\left(1 - \\frac{1}{\\sqrt{2}}\\right) \\cdot \\frac{\\sqrt{2}}{1} \\\\[5pt]\n&amp;= \\sqrt{2} - 1\n\\end{align*}</p>\n\n<p>Now, using the second identity with $\\theta = \\pi/4$, we have</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} \n&amp;= \\sqrt{\\frac{1 - \\cos(\\pi/4)}{1 + \\cos(\\pi/4)}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}} \\cdot \\left(\\frac{\\sqrt 2}{\\sqrt 2}\\right)} \\\\[7pt]\n&amp;= \\sqrt{\\frac{\\sqrt{2} - 1}{\\sqrt{2} + 1}}\n\\end{align*}</p>\n" }, { "answer_id": 2090828, "score": 10, "text": "<p>\\begin{align}\n\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}+1}} &amp;= \\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}+1}}\\cdot\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}-1}}\\\\\n&amp;=\\sqrt{\\frac{\\left(\\sqrt{2}-1\\right)^2}{2-1}}\\\\\n&amp;=\\sqrt{\\left(\\sqrt{2}-1\\right)^2}\\\\\\\\\n&amp;=\\sqrt{2}-1\n\\end{align}</p>\n" }, { "answer_id": 2090834, "score": 6, "text": "<p>For $a,b&gt;0$, if $ab=1$, then\n$$\n\\frac{a}{b}=a^2\n$$\nand thus\n$$\n\\sqrt{\\frac{a}{b}}=a.\n$$</p>\n\n<p>Now, consider $a=\\sqrt{2}-1$ and $b=\\sqrt{2}+1$.</p>\n" }, { "answer_id": 2090897, "score": 1, "text": "<p>Since I've expanded on my initial <a href=\"https://math.stackexchange.com/questions/2090821/how-is-sqrt-frac-sqrt2-1-sqrt21-sqrt2-1?noredirect=1#comment4297405_2090821\">\"joke 
comment\"</a>, I might as well make it a full joke answer :) By that I mean, nobody in their right mind would take this approach to <em>actually</em> verify that the two quantities are equal: instead, what follows is a good, but limited, way to produce expressions with radicals that look different, but are really not.</p>\n\n<p>The trigonometric function $\\tan$ (tangent) has a wide variety of <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_formula\" rel=\"nofollow noreferrer\">half-angle formulas</a>. I would like to use the following two:</p>\n\n<p>\\begin{align*}\n\\tan \\frac\\theta 2 &amp;= \\frac{1 - \\sin \\theta}{\\cos \\theta} \\tag{1}\\\\[10pt]\n\\tan \\frac\\theta 2 &amp;= \\sqrt{\\frac{1 - \\cos \\theta}{1 + \\cos \\theta}} \\tag{2}\n\\end{align*} </p>\n\n<p>Evaluating the first at $\\theta = \\frac{\\pi}{4}$, we have that</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} &amp;= \\frac{1 - \\sin(\\pi/4)}{\\cos(\\pi/4)} \\\\[7pt]\n &amp;= \\frac{1 - \\frac{1}{\\sqrt{2}}}{\\frac{1}{\\sqrt{2}}} \\\\[7pt]\n&amp;= \\left(1 - \\frac{1}{\\sqrt{2}}\\right) \\cdot \\frac{\\sqrt{2}}{1} \\\\[5pt]\n&amp;= \\sqrt{2} - 1\n\\end{align*}</p>\n\n<p>Now, using the second identity with $\\theta = \\pi/4$, we have</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} \n&amp;= \\sqrt{\\frac{1 - \\cos(\\pi/4)}{1 + \\cos(\\pi/4)}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}} \\cdot \\left(\\frac{\\sqrt 2}{\\sqrt 2}\\right)} \\\\[7pt]\n&amp;= \\sqrt{\\frac{\\sqrt{2} - 1}{\\sqrt{2} + 1}}\n\\end{align*}</p>\n" }, { "answer_id": 2090828, "score": 10, "text": "<p>\\begin{align}\n\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}+1}} &amp;= 
\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}+1}}\\cdot\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}-1}}\\\\\n&amp;=\\sqrt{\\frac{\\left(\\sqrt{2}-1\\right)^2}{2-1}}\\\\\n&amp;=\\sqrt{\\left(\\sqrt{2}-1\\right)^2}\\\\\\\\\n&amp;=\\sqrt{2}-1\n\\end{align}</p>\n" }, { "answer_id": 2090834, "score": 6, "text": "<p>For $a,b&gt;0$, if $ab=1$, then\n$$\n\\frac{a}{b}=a^2\n$$\nand thus\n$$\n\\sqrt{\\frac{a}{b}}=a.\n$$</p>\n\n<p>Now, consider $a=\\sqrt{2}-1$ and $b=\\sqrt{2}+1$.</p>\n" }, { "answer_id": 2090897, "score": 1, "text": "<p>Since I've expanded on my initial <a href=\"https://math.stackexchange.com/questions/2090821/how-is-sqrt-frac-sqrt2-1-sqrt21-sqrt2-1?noredirect=1#comment4297405_2090821\">\"joke comment\"</a>, I might as well make it a full joke answer :) By that I mean, nobody in their right mind would take this approach to <em>actually</em> verify that the two quantities are equal: instead, what follows is a good, but limited, way to produce expressions with radicals that look different, but are really not.</p>\n\n<p>The trigonometric function $\\tan$ (tangent) has a wide variety of <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_formula\" rel=\"nofollow noreferrer\">half-angle formulas</a>. 
I would like to use the following two:</p>\n\n<p>\\begin{align*}\n\\tan \\frac\\theta 2 &amp;= \\frac{1 - \\sin \\theta}{\\cos \\theta} \\tag{1}\\\\[10pt]\n\\tan \\frac\\theta 2 &amp;= \\sqrt{\\frac{1 - \\cos \\theta}{1 + \\cos \\theta}} \\tag{2}\n\\end{align*} </p>\n\n<p>Evaluating the first at $\\theta = \\frac{\\pi}{4}$, we have that</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} &amp;= \\frac{1 - \\sin(\\pi/4)}{\\cos(\\pi/4)} \\\\[7pt]\n &amp;= \\frac{1 - \\frac{1}{\\sqrt{2}}}{\\frac{1}{\\sqrt{2}}} \\\\[7pt]\n&amp;= \\left(1 - \\frac{1}{\\sqrt{2}}\\right) \\cdot \\frac{\\sqrt{2}}{1} \\\\[5pt]\n&amp;= \\sqrt{2} - 1\n\\end{align*}</p>\n\n<p>Now, using the second identity with $\\theta = \\pi/4$, we have</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} \n&amp;= \\sqrt{\\frac{1 - \\cos(\\pi/4)}{1 + \\cos(\\pi/4)}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}} \\cdot \\left(\\frac{\\sqrt 2}{\\sqrt 2}\\right)} \\\\[7pt]\n&amp;= \\sqrt{\\frac{\\sqrt{2} - 1}{\\sqrt{2} + 1}}\n\\end{align*}</p>\n" }, { "answer_id": 2090828, "score": 10, "text": "<p>\\begin{align}\n\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}+1}} &amp;= \\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}+1}}\\cdot\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}-1}}\\\\\n&amp;=\\sqrt{\\frac{\\left(\\sqrt{2}-1\\right)^2}{2-1}}\\\\\n&amp;=\\sqrt{\\left(\\sqrt{2}-1\\right)^2}\\\\\\\\\n&amp;=\\sqrt{2}-1\n\\end{align}</p>\n" }, { "answer_id": 2090834, "score": 6, "text": "<p>For $a,b&gt;0$, if $ab=1$, then\n$$\n\\frac{a}{b}=a^2\n$$\nand thus\n$$\n\\sqrt{\\frac{a}{b}}=a.\n$$</p>\n\n<p>Now, consider $a=\\sqrt{2}-1$ and $b=\\sqrt{2}+1$.</p>\n" }, { "answer_id": 2090897, "score": 1, "text": "<p>Since I've expanded on my initial <a href=\"https://math.stackexchange.com/questions/2090821/how-is-sqrt-frac-sqrt2-1-sqrt21-sqrt2-1?noredirect=1#comment4297405_2090821\">\"joke 
comment\"</a>, I might as well make it a full joke answer :) By that I mean, nobody in their right mind would take this approach to <em>actually</em> verify that the two quantities are equal: instead, what follows is a good, but limited, way to produce expressions with radicals that look different, but are really not.</p>\n\n<p>The trigonometric function $\\tan$ (tangent) has a wide variety of <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_formula\" rel=\"nofollow noreferrer\">half-angle formulas</a>. I would like to use the following two:</p>\n\n<p>\\begin{align*}\n\\tan \\frac\\theta 2 &amp;= \\frac{1 - \\sin \\theta}{\\cos \\theta} \\tag{1}\\\\[10pt]\n\\tan \\frac\\theta 2 &amp;= \\sqrt{\\frac{1 - \\cos \\theta}{1 + \\cos \\theta}} \\tag{2}\n\\end{align*} </p>\n\n<p>Evaluating the first at $\\theta = \\frac{\\pi}{4}$, we have that</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} &amp;= \\frac{1 - \\sin(\\pi/4)}{\\cos(\\pi/4)} \\\\[7pt]\n &amp;= \\frac{1 - \\frac{1}{\\sqrt{2}}}{\\frac{1}{\\sqrt{2}}} \\\\[7pt]\n&amp;= \\left(1 - \\frac{1}{\\sqrt{2}}\\right) \\cdot \\frac{\\sqrt{2}}{1} \\\\[5pt]\n&amp;= \\sqrt{2} - 1\n\\end{align*}</p>\n\n<p>Now, using the second identity with $\\theta = \\pi/4$, we have</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} \n&amp;= \\sqrt{\\frac{1 - \\cos(\\pi/4)}{1 + \\cos(\\pi/4)}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}} \\cdot \\left(\\frac{\\sqrt 2}{\\sqrt 2}\\right)} \\\\[7pt]\n&amp;= \\sqrt{\\frac{\\sqrt{2} - 1}{\\sqrt{2} + 1}}\n\\end{align*}</p>\n" }, { "answer_id": 2090828, "score": 10, "text": "<p>\\begin{align}\n\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}+1}} &amp;= 
\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}+1}}\\cdot\\sqrt{\\frac{\\sqrt{2}-1}{\\sqrt{2}-1}}\\\\\n&amp;=\\sqrt{\\frac{\\left(\\sqrt{2}-1\\right)^2}{2-1}}\\\\\n&amp;=\\sqrt{\\left(\\sqrt{2}-1\\right)^2}\\\\\\\\\n&amp;=\\sqrt{2}-1\n\\end{align}</p>\n" }, { "answer_id": 2090834, "score": 6, "text": "<p>For $a,b&gt;0$, if $ab=1$, then\n$$\n\\frac{a}{b}=a^2\n$$\nand thus\n$$\n\\sqrt{\\frac{a}{b}}=a.\n$$</p>\n\n<p>Now, consider $a=\\sqrt{2}-1$ and $b=\\sqrt{2}+1$.</p>\n" }, { "answer_id": 2090897, "score": 1, "text": "<p>Since I've expanded on my initial <a href=\"https://math.stackexchange.com/questions/2090821/how-is-sqrt-frac-sqrt2-1-sqrt21-sqrt2-1?noredirect=1#comment4297405_2090821\">\"joke comment\"</a>, I might as well make it a full joke answer :) By that I mean, nobody in their right mind would take this approach to <em>actually</em> verify that the two quantities are equal: instead, what follows is a good, but limited, way to produce expressions with radicals that look different, but are really not.</p>\n\n<p>The trigonometric function $\\tan$ (tangent) has a wide variety of <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_formula\" rel=\"nofollow noreferrer\">half-angle formulas</a>. 
I would like to use the following two:</p>\n\n<p>\\begin{align*}\n\\tan \\frac\\theta 2 &amp;= \\frac{1 - \\sin \\theta}{\\cos \\theta} \\tag{1}\\\\[10pt]\n\\tan \\frac\\theta 2 &amp;= \\sqrt{\\frac{1 - \\cos \\theta}{1 + \\cos \\theta}} \\tag{2}\n\\end{align*} </p>\n\n<p>Evaluating the first at $\\theta = \\frac{\\pi}{4}$, we have that</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} &amp;= \\frac{1 - \\sin(\\pi/4)}{\\cos(\\pi/4)} \\\\[7pt]\n &amp;= \\frac{1 - \\frac{1}{\\sqrt{2}}}{\\frac{1}{\\sqrt{2}}} \\\\[7pt]\n&amp;= \\left(1 - \\frac{1}{\\sqrt{2}}\\right) \\cdot \\frac{\\sqrt{2}}{1} \\\\[5pt]\n&amp;= \\sqrt{2} - 1\n\\end{align*}</p>\n\n<p>Now, using the second identity with $\\theta = \\pi/4$, we have</p>\n\n<p>\\begin{align*}\n\\tan \\frac{\\pi}{8} = \\tan \\frac{\\pi/4}{2} \n&amp;= \\sqrt{\\frac{1 - \\cos(\\pi/4)}{1 + \\cos(\\pi/4)}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}}} \\\\[7pt]\n&amp;=\\sqrt{\\frac{1 - \\frac{1}{\\sqrt{2}}}{1 + \\frac{1}{\\sqrt{2}}} \\cdot \\left(\\frac{\\sqrt 2}{\\sqrt 2}\\right)} \\\\[7pt]\n&amp;= \\sqrt{\\frac{\\sqrt{2} - 1}{\\sqrt{2} + 1}}\n\\end{align*}</p>\n" } ]
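The radical identity discussed in this thread lends itself to a quick numerical sanity check. The snippet below is illustrative only (it is not part of any answer above) and uses just Python's standard `math` module; it mirrors the observation in answer 2090834 that $(\sqrt{2}-1)(\sqrt{2}+1)=1$:

```python
import math

# Left-hand side: the nested radical from the question.
lhs = math.sqrt((math.sqrt(2) - 1) / (math.sqrt(2) + 1))

# Right-hand side: the claimed simplified form.
rhs = math.sqrt(2) - 1

# The two sides agree to floating-point precision.
print(abs(lhs - rhs) < 1e-12)  # → True
```

A floating-point comparison like this cannot replace the algebraic proofs given in the answers, but it is a cheap way to catch a sign or rationalization mistake before writing one up.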
Question: How is $\sqrt{\frac{\sqrt{2}-1}{\sqrt{2}+1}} = \sqrt{2}-1$? Answer 1: <p>\begin{align} \sqrt{\frac{\sqrt{2}-1}{\sqrt{2}+1}} &amp;= \sqrt{\frac{\sqrt{2}-1}{\sqrt{2}+1}}\cdot\sqrt{\frac{\sqrt{2}-1}{\sqrt{2}-1}}\\ &amp;=\sqrt{\frac{\left(\sqrt{2}-1\right)^2}{2-1}}\\ &amp;=\sqrt{\left(\sqrt{2}-1\right)^2}\\\\ &amp;=\sqrt{2}-1 \end{align}</p> Answer 2: <p>For $a,b&gt;0$, if $ab=1$, then $$ \frac{a}{b}=a^2 $$ and thus $$ \sqrt{\frac{a}{b}}=a. $$</p> <p>Now, consider $a=\sqrt{2}-1$ and $b=\sqrt{2}+1$.</p> Answer 3: <p>Since I've expanded on my initial <a href="https://math.stackexchange.com/questions/2090821/how-is-sqrt-frac-sqrt2-1-sqrt21-sqrt2-1?noredirect=1#comment4297405_2090821">"joke comment"</a>, I might as well make it a full joke answer :) By that I mean, nobody in their right mind would take this approach to <em>actually</em> verify that the two quantities are equal: instead, what follows is a good, but limited, way to produce expressions with radicals that look different, but are really not.</p> <p>The trigonometric function $\tan$ (tangent) has a wide variety of <a href="https://en.wikipedia.org/wiki/Tangent_half-angle_formula" rel="nofollow noreferrer">half-angle formulas</a>. 
I would like to use the following two:</p> <p>\begin{align*} \tan \frac\theta 2 &amp;= \frac{1 - \cos \theta}{\sin \theta} \tag{1}\\[10pt] \tan \frac\theta 2 &amp;= \sqrt{\frac{1 - \cos \theta}{1 + \cos \theta}} \tag{2} \end{align*} </p> <p>Evaluating the first at $\theta = \frac{\pi}{4}$, we have that</p> <p>\begin{align*} \tan \frac{\pi}{8} = \tan \frac{\pi/4}{2} &amp;= \frac{1 - \cos(\pi/4)}{\sin(\pi/4)} \\[7pt] &amp;= \frac{1 - \frac{1}{\sqrt{2}}}{\frac{1}{\sqrt{2}}} \\[7pt] &amp;= \left(1 - \frac{1}{\sqrt{2}}\right) \cdot \frac{\sqrt{2}}{1} \\[5pt] &amp;= \sqrt{2} - 1 \end{align*}</p> <p>Now, using the second identity with $\theta = \pi/4$, we have</p> <p>\begin{align*} \tan \frac{\pi}{8} = \tan \frac{\pi/4}{2} &amp;= \sqrt{\frac{1 - \cos(\pi/4)}{1 + \cos(\pi/4)}} \\[7pt] &amp;=\sqrt{\frac{1 - \frac{1}{\sqrt{2}}}{1 + \frac{1}{\sqrt{2}}}} \\[7pt] &amp;=\sqrt{\frac{1 - \frac{1}{\sqrt{2}}}{1 + \frac{1}{\sqrt{2}}} \cdot \left(\frac{\sqrt 2}{\sqrt 2}\right)} \\[7pt] &amp;= \sqrt{\frac{\sqrt{2} - 1}{\sqrt{2} + 1}} \end{align*}</p>
\sqrt{\frac{\sqrt{2} - 1}{\sqrt{2} + 1}} \end{align*}</p> Answer 28: <p>\begin{align} \sqrt{\frac{\sqrt{2}-1}{\sqrt{2}+1}} &amp;= \sqrt{\frac{\sqrt{2}-1}{\sqrt{2}+1}}\cdot\sqrt{\frac{\sqrt{2}-1}{\sqrt{2}-1}}\\ &amp;=\sqrt{\frac{\left(\sqrt{2}-1\right)^2}{2-1}}\\ &amp;=\sqrt{\left(\sqrt{2}-1\right)^2}\\\\ &amp;=\sqrt{2}-1 \end{align}</p> Answer 29: <p>For $a,b&gt;0$, if $ab=1$, then $$ \frac{a}{b}=a^2 $$ and thus $$ \sqrt{\frac{a}{b}}=a. $$</p> <p>Now, consider $a=\sqrt{2}-1$ and $b=\sqrt{2}+1$.</p> Answer 30: <p>Since I've expanded on my initial <a href="https://math.stackexchange.com/questions/2090821/how-is-sqrt-frac-sqrt2-1-sqrt21-sqrt2-1?noredirect=1#comment4297405_2090821">"joke comment"</a>, I might as well make it a full joke answer :) By that I mean, nobody in their right mind would take this approach to <em>actually</em> verify that the two quantities are equal: instead, what follows is a good, but limited, way to produce expressions with radicals that look different, but are really not.</p> <p>The trigonometric function $\tan$ (tangent) has a wide variety of <a href="https://en.wikipedia.org/wiki/Tangent_half-angle_formula" rel="nofollow noreferrer">half-angle formulas</a>. 
I would like to use the following two:</p> <p>\begin{align*} \tan \frac\theta 2 &amp;= \frac{1 - \sin \theta}{\cos \theta} \tag{1}\\[10pt] \tan \frac\theta 2 &amp;= \sqrt{\frac{1 - \cos \theta}{1 + \cos \theta}} \tag{2} \end{align*} </p> <p>Evaluating the first at $\theta = \frac{\pi}{4}$, we have that</p> <p>\begin{align*} \tan \frac{\pi}{8} = \tan \frac{\pi/4}{2} &amp;= \frac{1 - \sin(\pi/4)}{\cos(\pi/4)} \\[7pt] &amp;= \frac{1 - \frac{1}{\sqrt{2}}}{\frac{1}{\sqrt{2}}} \\[7pt] &amp;= \left(1 - \frac{1}{\sqrt{2}}\right) \cdot \frac{\sqrt{2}}{1} \\[5pt] &amp;= \sqrt{2} - 1 \end{align*}</p> <p>Now, using the second identity with $\theta = \pi/4$, we have</p> <p>\begin{align*} \tan \frac{\pi}{8} = \tan \frac{\pi/4}{2} &amp;= \sqrt{\frac{1 - \cos(\pi/4)}{1 + \cos(\pi/4)}} \\[7pt] &amp;=\sqrt{\frac{1 - \frac{1}{\sqrt{2}}}{1 + \frac{1}{\sqrt{2}}}} \\[7pt] &amp;=\sqrt{\frac{1 - \frac{1}{\sqrt{2}}}{1 + \frac{1}{\sqrt{2}}} \cdot \left(\frac{\sqrt 2}{\sqrt 2}\right)} \\[7pt] &amp;= \sqrt{\frac{\sqrt{2} - 1}{\sqrt{2} + 1}} \end{align*}</p> Answer 31: <p>\begin{align} \sqrt{\frac{\sqrt{2}-1}{\sqrt{2}+1}} &amp;= \sqrt{\frac{\sqrt{2}-1}{\sqrt{2}+1}}\cdot\sqrt{\frac{\sqrt{2}-1}{\sqrt{2}-1}}\\ &amp;=\sqrt{\frac{\left(\sqrt{2}-1\right)^2}{2-1}}\\ &amp;=\sqrt{\left(\sqrt{2}-1\right)^2}\\\\ &amp;=\sqrt{2}-1 \end{align}</p> Answer 32: <p>For $a,b&gt;0$, if $ab=1$, then $$ \frac{a}{b}=a^2 $$ and thus $$ \sqrt{\frac{a}{b}}=a. 
$$</p> <p>Now, consider $a=\sqrt{2}-1$ and $b=\sqrt{2}+1$.</p> Answer 33: <p>Since I've expanded on my initial <a href="https://math.stackexchange.com/questions/2090821/how-is-sqrt-frac-sqrt2-1-sqrt21-sqrt2-1?noredirect=1#comment4297405_2090821">"joke comment"</a>, I might as well make it a full joke answer :) By that I mean, nobody in their right mind would take this approach to <em>actually</em> verify that the two quantities are equal: instead, what follows is a good, but limited, way to produce expressions with radicals that look different, but are really not.</p> <p>The trigonometric function $\tan$ (tangent) has a wide variety of <a href="https://en.wikipedia.org/wiki/Tangent_half-angle_formula" rel="nofollow noreferrer">half-angle formulas</a>. I would like to use the following two:</p> <p>\begin{align*} \tan \frac\theta 2 &amp;= \frac{1 - \sin \theta}{\cos \theta} \tag{1}\\[10pt] \tan \frac\theta 2 &amp;= \sqrt{\frac{1 - \cos \theta}{1 + \cos \theta}} \tag{2} \end{align*} </p> <p>Evaluating the first at $\theta = \frac{\pi}{4}$, we have that</p> <p>\begin{align*} \tan \frac{\pi}{8} = \tan \frac{\pi/4}{2} &amp;= \frac{1 - \sin(\pi/4)}{\cos(\pi/4)} \\[7pt] &amp;= \frac{1 - \frac{1}{\sqrt{2}}}{\frac{1}{\sqrt{2}}} \\[7pt] &amp;= \left(1 - \frac{1}{\sqrt{2}}\right) \cdot \frac{\sqrt{2}}{1} \\[5pt] &amp;= \sqrt{2} - 1 \end{align*}</p> <p>Now, using the second identity with $\theta = \pi/4$, we have</p> <p>\begin{align*} \tan \frac{\pi}{8} = \tan \frac{\pi/4}{2} &amp;= \sqrt{\frac{1 - \cos(\pi/4)}{1 + \cos(\pi/4)}} \\[7pt] &amp;=\sqrt{\frac{1 - \frac{1}{\sqrt{2}}}{1 + \frac{1}{\sqrt{2}}}} \\[7pt] &amp;=\sqrt{\frac{1 - \frac{1}{\sqrt{2}}}{1 + \frac{1}{\sqrt{2}}} \cdot \left(\frac{\sqrt 2}{\sqrt 2}\right)} \\[7pt] &amp;= \sqrt{\frac{\sqrt{2} - 1}{\sqrt{2} + 1}} \end{align*}</p>
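A quick floating-point sanity check (an editorial addition, not part of the original thread) of the identity $\sqrt{\frac{\sqrt{2}-1}{\sqrt{2}+1}}=\sqrt{2}-1$ that the answers derive:

```python
import math

s = math.sqrt(2)

# Rationalise the denominator: (s - 1)/(s + 1) = (s - 1)^2 / ((s + 1)(s - 1)),
# and (s + 1)(s - 1) = s^2 - 1 = 1, so the fraction equals (s - 1)^2.
lhs = math.sqrt((s - 1) / (s + 1))
rhs = s - 1
assert math.isclose(lhs, rhs, rel_tol=1e-12)
print(lhs, rhs)
```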
Using Cartan decomposition to prove $SO(p,q)$ has two connected components
2,091,097
6
https://math.stackexchange.com/questions/2091097/using-cartan-decomposition-to-prove-sop-q-has-two-connected-components
<p>The Lie group $SO(p,q)$ has two connected components. The one proof I know uses the transitive action of the group on a two-sheeted hyperboloid (it can be found in Onishchik and Vinberg's book).</p> <p>I am very confident that we can arrive at the same result by inspecting the Cartan decomposition of $SO(p,q)$.</p> <p>Recall that the Cartan decomposition says, in short, that given a real Lie subgroup $G$ of $GL_n(\mathbb{C})$ and $K= \{ g \in G : g= (g^*)^{-1} \}$, the natural map $$ K \times \mathfrak{h} \longrightarrow G$$ $$(k, X) \mapsto k \exp(X),$$ where $\mathfrak{h}$ is the space of all Hermitian elements of the Lie algebra of $G$, is a diffeomorphism.</p> <p>In the case of $O(p,q),$ which has 4 connected components, the Cartan decomposition tells us that $$O(p,q) \simeq O(p) \times O(q) \times \mathfrak{h},$$ where $\mathfrak{h}$ denotes the space of symmetric matrices. Hence, since $O(n)$ has two connected components, it follows that $O(p,q)$ must have 4 connected components. This decomposition arises by noting that we can identify $K$ with $O(p) \times O(q)$.</p> <p>Applying the same reasoning to $SO(p,q)$, I verified we can identify the set $K$ with $SO(p) \times SO(q)$. However, $SO(n)$ is connected, and therefore the product $SO(p) \times SO(q) \times \mathfrak{h}$ is also connected. </p> <p>Something is clearly escaping me. What have I done wrong?</p>
[ { "answer_id": 2091516, "score": 3, "text": "<p>I think what went wrong is that for $SO(p,q)$ you cannot identify $K$ with $SO(p)\\times SO(q)$, but with $S(O(p)\\times O(q))=\\{(A,B)\\in O(p)\\times O(q):\\det(A)\\det(B)=1\\}$, which has two connected components. This is exactly the general version of the fact observed in the answer of @JohnHughes for $p=q=1$. </p>\n" }, { "answer_id": 2091121, "score": 2, "text": "<p>Did you work out your claims in detail for $p = q = 1$? </p>\n\n<p>In that case, the elements of $G$ are of the form\n$$\n\\begin{bmatrix} k\\cosh t &amp; \\sinh t \\\\ k\\sinh t &amp; \\cosh t \\end{bmatrix} $$ where $k = \\pm 1$, but $K$ contains <em>both</em> the identity and matrix and $\\begin{bmatrix} -1 &amp; 0 \\\\ 0 &amp; 1 \\end{bmatrix} $ , so $K$ is not the same as $SO(1) \\times SO(1)$, which contains only the identity. </p>\n\n<p>(I might be messing something up here -- this is way outside my domain -- but I'm guessing that you forgot the possibility of orientation-reversing items within $K$. )</p>\n" } ]
Question: Using Cartan decomposition to prove $SO(p,q)$ has two connected components Answer 1: <p>I think what went wrong is that for $SO(p,q)$ you cannot identify $K$ with $SO(p)\times SO(q)$, but with $S(O(p)\times O(q))=\{(A,B)\in O(p)\times O(q):\det(A)\det(B)=1\}$, which has two connected components. This is exactly the general version of the fact observed in the answer of @JohnHughes for $p=q=1$. </p> Answer 2: <p>Did you work out your claims in detail for $p = q = 1$? </p> <p>In that case, the elements of $G$ are of the form $$ \begin{bmatrix} k\cosh t &amp; \sinh t \\ k\sinh t &amp; \cosh t \end{bmatrix} $$ where $k = \pm 1$, but $K$ contains <em>both</em> the identity and matrix and $\begin{bmatrix} -1 &amp; 0 \\ 0 &amp; 1 \end{bmatrix} $ , so $K$ is not the same as $SO(1) \times SO(1)$, which contains only the identity. </p> <p>(I might be messing something up here -- this is way outside my domain -- but I'm guessing that you forgot the possibility of orientation-reversing items within $K$. )</p>
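As an editorial aside (not part of the original records): the correction made in the answers — that the compact factor of $SO(p,q)$ is $S(O(p)\times O(q))$ rather than $SO(p)\times SO(q)$ — can be checked numerically in the smallest case $p=q=1$, where $\eta=\mathrm{diag}(1,-1)$ is the signature matrix:

```python
import numpy as np

# Signature matrix for O(1,1): eta = diag(1, -1).
eta = np.diag([1.0, -1.0])

def in_SO11(M):
    """Check that M preserves the form eta and has determinant 1."""
    return bool(np.allclose(M.T @ eta @ M, eta)
                and np.isclose(np.linalg.det(M), 1.0))

# diag(-1, -1): both 1x1 blocks lie in O(1) \ SO(1) = {-1}, yet the product
# of their determinants is 1, so it belongs to SO(1,1) -- an element of
# S(O(1) x O(1)) that is not in SO(1) x SO(1) = {I}.
assert in_SO11(np.diag([-1.0, -1.0]))

# By contrast, diag(1, -1) preserves eta but has determinant -1,
# so it is *not* in SO(1,1).
assert not in_SO11(np.diag([1.0, -1.0]))
print("K = S(O(1) x O(1)) has two components")
```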
Show that $A=B$, provided that $A^2=B^2$ (and some other conditions are satisfied)
2,091,143
6
https://math.stackexchange.com/questions/2091143/show-that-a-b-provided-that-a2-b2-and-some-other-conditions-are-satisfie
<p>I'm trying to do this question from an old past paper, no answers to look at and because it's from a previous year I'm not entirely sure I've even covered the material; here it is:</p> <blockquote> <p>Let $A,B\in M_n(\mathbb R)$ be such that $A^2 = B^2,\, AB = BA$ and $\det(A + B) \ne 0$. Show that $A = B$.</p> </blockquote> <p>I've been playing around with it for ages but can't get anything, from the determinant part I'm guessing I have to involve the extant inverse of $A+B$ but I've never done that before and from looking it up it seems abit beyond what I should be doing. Any ideas?</p>
[ { "answer_id": 2091146, "score": 8, "text": "<p><strong>Hint</strong>: Expand and simplify $(A-B)(A+B) = A^2 + AB - BA - B^2 = \\cdots$</p>\n\n<p>If you can show that $(A-B)(A+B) = 0$, then you can multiply both sides by $(A+B)^{-1}$. (The inverse of $A+B$ exists since $\\det(A+B) \\neq 0$.)</p>\n" }, { "answer_id": 2094886, "score": 2, "text": "<p>Here's a direct proof:\n\\begin{align*}\nA&amp;=(A+B)^{-1}(A+B)A\\\\\n&amp;=(A+B)^{-1}(A^2+BA)\\\\\n&amp;=(A+B)^{-1}(B^2+AB)\\\\\n&amp;=(A+B)^{-1}(A+B)B=B\n\\end{align*}</p>\n" }, { "answer_id": 2091170, "score": 1, "text": "<p>When $n=1$, this is immediate since $A$ and $B$ are nothing but real numbers:\n$$\n0=A^2-B^2=(A+B)(A-B)\\tag{1}\n$$\nand $\\det(A+B)\\neq 0$ means $A+B\\neq 0$ which allows one to <em>cancel</em> the $A+B$ term. </p>\n\n<p>For $n&gt;1$, (1) is no true in general since matrices do not necessarily commute. But in this particular problem, one has $AB=BA$.</p>\n" }, { "answer_id": 2625212, "score": 0, "text": "<p>we have , $\\det(A+B)\\neq0 \\implies (A+B)$ is non-singular matrix.</p>\n\n<p>Now, </p>\n\n<p>$(A+B)(A-B)=A^2-AB+BA-B^2=A^2-B^2=O$ ( since, $AB=BA$ and $A^2=B^2 $ )</p>\n\n<p>you should know that ,</p>\n\n<blockquote>\n <p>If the product of two matrices is null , then either both will be\n singular or one of them must be null itself.</p>\n</blockquote>\n\n<p>Therefore $A-B=O\\implies A=B$ .\nHope it may helpful!</p>\n" } ]
Question: Show that $A=B$, provided that $A^2=B^2$ (and some other conditions are satisfied) Answer 1: <p><strong>Hint</strong>: Expand and simplify $(A-B)(A+B) = A^2 + AB - BA - B^2 = \cdots$</p> <p>If you can show that $(A-B)(A+B) = 0$, then you can multiply both sides by $(A+B)^{-1}$. (The inverse of $A+B$ exists since $\det(A+B) \neq 0$.)</p> Answer 2: <p>Here's a direct proof: \begin{align*} A&amp;=(A+B)^{-1}(A+B)A\\ &amp;=(A+B)^{-1}(A^2+BA)\\ &amp;=(A+B)^{-1}(B^2+AB)\\ &amp;=(A+B)^{-1}(A+B)B=B \end{align*}</p> Answer 3: <p>When $n=1$, this is immediate since $A$ and $B$ are nothing but real numbers: $$ 0=A^2-B^2=(A+B)(A-B)\tag{1} $$ and $\det(A+B)\neq 0$ means $A+B\neq 0$ which allows one to <em>cancel</em> the $A+B$ term. </p> <p>For $n&gt;1$, (1) is no true in general since matrices do not necessarily commute. But in this particular problem, one has $AB=BA$.</p> Answer 4: <p>we have , $\det(A+B)\neq0 \implies (A+B)$ is non-singular matrix.</p> <p>Now, </p> <p>$(A+B)(A-B)=A^2-AB+BA-B^2=A^2-B^2=O$ ( since, $AB=BA$ and $A^2=B^2 $ )</p> <p>you should know that ,</p> <blockquote> <p>If the product of two matrices is null , then either both will be singular or one of them must be null itself.</p> </blockquote> <p>Therefore $A-B=O\implies A=B$ . Hope it may helpful!</p>
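A quick numerical sketch of the algebra in the answers above (added here as an illustration; it is not part of the original thread). It checks the hint's key step — with $AB=BA$ the cross terms cancel, so $(A-B)(A+B)=A^2-B^2$ — and also shows why the hypothesis $\det(A+B)\neq 0$ cannot be dropped:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two commuting matrices: polynomials in the same matrix M always commute.
M = rng.standard_normal((4, 4))
A = M @ M + 3.0 * M + 2.0 * np.eye(4)
B = 2.0 * M - np.eye(4)

# Key step of the hint: with AB = BA the cross terms cancel,
# so (A - B)(A + B) = A^2 - B^2.
assert np.allclose(A @ B, B @ A)
assert np.allclose((A - B) @ (A + B), A @ A - B @ B)

# Why det(A + B) != 0 is essential: here A^2 = B^2 and AB = BA,
# yet A != B -- because A + B is singular, the cancellation fails.
A2 = np.diag([1.0, 2.0])
B2 = np.diag([1.0, -2.0])
assert np.allclose(A2 @ A2, B2 @ B2)
assert np.allclose(A2 @ B2, B2 @ A2)
print(np.linalg.det(A2 + B2))  # -> 0.0
```

The diagonal pair at the end satisfies every hypothesis except invertibility of $A+B$, which is exactly where the proof breaks down.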
Find the 100th digit of $( 1 + \sqrt 2 )^{3000}$
2,091,314
6
https://math.stackexchange.com/questions/2091314/find-the-100th-digit-of-1-sqrt-2-3000
<p>What is the 100th digit to the right of the decimal point in the decimal representation of <span class="math-container">$$( 1 + \sqrt 2 )^{3000}\ ?$$</span></p>
[ { "answer_id": 2091320, "score": 9, "text": "<p><strong>Hints</strong>: $(1+\\sqrt{2})^{3000}+(1-\\sqrt{2})^{3000}$ is an integer and $(1-\\sqrt{2})^{3000}$ is a small positive number.</p>\n\n<p>Can you prove these two facts and use them to get the answer?</p>\n" }, { "answer_id": 2092227, "score": 6, "text": "<p>The way I understand it now is that since <br> $ (1+\\sqrt{2})^{2} + (1 - \\sqrt{2})^{2} = 6$ <br> $\\therefore$ $ (1+\\sqrt{2})^{3000} + (1 - \\sqrt{2})^{3000} = N$ (N is an integer) <br> since $(1 - \\sqrt{2})^{3000}$ would be a very small number so $ (1+\\sqrt{2})^{3000} = N - 0.0000...001$ so the 100th digit to the right of the decimal point should be 9</p>\n" } ]
Question: Find the 100th digit of $( 1 + \sqrt 2 )^{3000}$ Answer 1: <p><strong>Hints</strong>: $(1+\sqrt{2})^{3000}+(1-\sqrt{2})^{3000}$ is an integer and $(1-\sqrt{2})^{3000}$ is a small positive number.</p> <p>Can you prove these two facts and use them to get the answer?</p> Answer 2: <p>The way I understand it now is that since <br> $ (1+\sqrt{2})^{2} + (1 - \sqrt{2})^{2} = 6$ <br> $\therefore$ $ (1+\sqrt{2})^{3000} + (1 - \sqrt{2})^{3000} = N$ (N is an integer) <br> since $(1 - \sqrt{2})^{3000}$ would be a very small number so $ (1+\sqrt{2})^{3000} = N - 0.0000...001$ so the 100th digit to the right of the decimal point should be 9</p>
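The argument in Answer 2 can be verified with exact integer arithmetic; a minimal Python sketch (the recurrence and variable names are mine, not part of either answer):

```python
# a_n = (1+sqrt(2))**n + (1-sqrt(2))**n is an integer for every n, since it
# satisfies the recurrence a_n = 2*a_{n-1} + a_{n-2} with a_0 = 2, a_1 = 2
# (check: a_2 = 6, matching the value used in Answer 2).
a_prev, a = 2, 2          # a_0, a_1
for _ in range(2999):     # after the loop, a == a_3000
    a_prev, a = a, 2 * a + a_prev

# 0 < (sqrt(2)-1)**3000 = (3-2*sqrt(2))**1500 < 0.2**1500 < 10**-1000,
# so (1+sqrt(2))**3000 = a_3000 - eps with eps far smaller than 10**-100.
# Therefore floor((1+sqrt(2))**3000 * 10**100) = a_3000 * 10**100 - 1,
# and its last digit is the 100th decimal digit.
digit_100 = (a * 10**100 - 1) % 10
assert digit_100 == 9
```

Since the error term is below 10^-1000, the same computation shows every decimal digit up to roughly the 1000th is 9, not just the 100th.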
Knockout tournament.
2,091,350
6
https://math.stackexchange.com/questions/2091350/knockout-tournament
<p>P1, P2, P3, P4, P5, P6, P7, P8, P9, P10, P11, P12, P13, P14, P15, P16 are 16 players who play a knockout tournament. In any match between P(i) and P(j), P(i) wins if i is less than j.</p> <p>Find the probability that P6 reaches the final.</p> <p>I tried making cases, but they seem endless. We know for sure that P1 will win the tournament and P16 will be eliminated in the first round. But there are many other cases for the first round only. Please help.</p>
[ { "answer_id": 2091383, "score": 10, "text": "<p>Consider the initial draw sheet, expressed as a tree.\n<p>\nThe half of the draw containing P6 has $7$ other players, so there are $\\text{C}(15,7)$ choices for those $7$ players, all equally likely.\n<p>\nP6 reaches the final if and only if those $7$ players are all weaker than P6. There are $10$ such players.\n<p>\nIt follows that P6 reaches the final with probability $\\dfrac{\\text{C}(10,7)}{\\text{C}(15,7)} = \\dfrac{8}{429}$.</p>\n" }, { "answer_id": 2091377, "score": 9, "text": "<p>WLOG we can first assign P6 a spot on the tournament tree, then choose 5 spots for P1,...,P5, and the rest of the choices are irrelevant for this question.<br>\nP6 makes the final if and only if P1, P2, ... P5 are all assigned positions on the side of the tree opposite to the one where P6 is assigned. Once P6 is assigned her spot, there are 15 remaining, of which 8 constitute the \"other side\". There are\n${8 \\choose 5}$ ways to choose 5 of the \"other side\" spots for P1,...,P5, \nout of a total of ${15 \\choose 5}$. Thus the probability is\n$$ \\dfrac{\\binom{8}{5}}{\\binom{15}{5}} = \\frac{8}{429}$$</p>\n" } ]
Question: Knockout tournament. Answer 1: <p>Consider the intial draw sheet, expressed as a tree. <p> The half of the draw containing P6 has $7$ other players, so there are $\text{C}(15,7)$ choices for those $7$ players, all equally likely. <p> P6 reaches the final if and only if those $7$ players are all weaker than P6. There are $10$ such players. <p> It follows that P6 reaches the final with probability $\dfrac{\text{C}(10,7)}{\text{C}(15,7)} = \dfrac{8}{429}$.</p> Answer 2: <p>WLOG we can first assign P6 a spot on the tournament tree, then choose 5 spots for P1,...,P5, and the rest of the choices are irrelevant for this question.<br> P6 makes the final if and only if P1, P2, ... P5 are all assigned positions on the side of the tree of the tree opposite to the one where P6 is assigned. Once P6 is assigned her spot, there are 15 remaining, of which 8 constitute the "other side". There are ${8 \choose 5}$ ways to choose 5 of the "other side" spots for P1,...,P5, out of a total of ${15 \choose 5}$. Thus the probability is $$ \dfrac{8 \choose 5}{15 \choose 5} = \frac{8}{429}$$</p> Answer 3: <p>Consider the intial draw sheet, expressed as a tree. <p> The half of the draw containing P6 has $7$ other players, so there are $\text{C}(15,7)$ choices for those $7$ players, all equally likely. <p> P6 reaches the final if and only if those $7$ players are all weaker than P6. There are $10$ such players. <p> It follows that P6 reaches the final with probability $\dfrac{\text{C}(10,7)}{\text{C}(15,7)} = \dfrac{8}{429}$.</p> Answer 4: <p>WLOG we can first assign P6 a spot on the tournament tree, then choose 5 spots for P1,...,P5, and the rest of the choices are irrelevant for this question.<br> P6 makes the final if and only if P1, P2, ... P5 are all assigned positions on the side of the tree of the tree opposite to the one where P6 is assigned. Once P6 is assigned her spot, there are 15 remaining, of which 8 constitute the "other side". 
There are ${8 \choose 5}$ ways to choose 5 of the "other side" spots for P1,...,P5, out of a total of ${15 \choose 5}$. Thus the probability is $$ \dfrac{8 \choose 5}{15 \choose 5} = \frac{8}{429}$$</p> Answer 5: <p>Consider the intial draw sheet, expressed as a tree. <p> The half of the draw containing P6 has $7$ other players, so there are $\text{C}(15,7)$ choices for those $7$ players, all equally likely. <p> P6 reaches the final if and only if those $7$ players are all weaker than P6. There are $10$ such players. <p> It follows that P6 reaches the final with probability $\dfrac{\text{C}(10,7)}{\text{C}(15,7)} = \dfrac{8}{429}$.</p> Answer 6: <p>WLOG we can first assign P6 a spot on the tournament tree, then choose 5 spots for P1,...,P5, and the rest of the choices are irrelevant for this question.<br> P6 makes the final if and only if P1, P2, ... P5 are all assigned positions on the side of the tree of the tree opposite to the one where P6 is assigned. Once P6 is assigned her spot, there are 15 remaining, of which 8 constitute the "other side". There are ${8 \choose 5}$ ways to choose 5 of the "other side" spots for P1,...,P5, out of a total of ${15 \choose 5}$. Thus the probability is $$ \dfrac{8 \choose 5}{15 \choose 5} = \frac{8}{429}$$</p> Answer 7: <p>Consider the intial draw sheet, expressed as a tree. <p> The half of the draw containing P6 has $7$ other players, so there are $\text{C}(15,7)$ choices for those $7$ players, all equally likely. <p> P6 reaches the final if and only if those $7$ players are all weaker than P6. There are $10$ such players. <p> It follows that P6 reaches the final with probability $\dfrac{\text{C}(10,7)}{\text{C}(15,7)} = \dfrac{8}{429}$.</p> Answer 8: <p>WLOG we can first assign P6 a spot on the tournament tree, then choose 5 spots for P1,...,P5, and the rest of the choices are irrelevant for this question.<br> P6 makes the final if and only if P1, P2, ... 
P5 are all assigned positions on the side of the tree of the tree opposite to the one where P6 is assigned. Once P6 is assigned her spot, there are 15 remaining, of which 8 constitute the "other side". There are ${8 \choose 5}$ ways to choose 5 of the "other side" spots for P1,...,P5, out of a total of ${15 \choose 5}$. Thus the probability is $$ \dfrac{8 \choose 5}{15 \choose 5} = \frac{8}{429}$$</p> Answer 9: <p>Consider the intial draw sheet, expressed as a tree. <p> The half of the draw containing P6 has $7$ other players, so there are $\text{C}(15,7)$ choices for those $7$ players, all equally likely. <p> P6 reaches the final if and only if those $7$ players are all weaker than P6. There are $10$ such players. <p> It follows that P6 reaches the final with probability $\dfrac{\text{C}(10,7)}{\text{C}(15,7)} = \dfrac{8}{429}$.</p> Answer 10: <p>WLOG we can first assign P6 a spot on the tournament tree, then choose 5 spots for P1,...,P5, and the rest of the choices are irrelevant for this question.<br> P6 makes the final if and only if P1, P2, ... P5 are all assigned positions on the side of the tree of the tree opposite to the one where P6 is assigned. Once P6 is assigned her spot, there are 15 remaining, of which 8 constitute the "other side". There are ${8 \choose 5}$ ways to choose 5 of the "other side" spots for P1,...,P5, out of a total of ${15 \choose 5}$. Thus the probability is $$ \dfrac{8 \choose 5}{15 \choose 5} = \frac{8}{429}$$</p> Answer 11: <p>Consider the intial draw sheet, expressed as a tree. <p> The half of the draw containing P6 has $7$ other players, so there are $\text{C}(15,7)$ choices for those $7$ players, all equally likely. <p> P6 reaches the final if and only if those $7$ players are all weaker than P6. There are $10$ such players. 
<p> It follows that P6 reaches the final with probability $\dfrac{\text{C}(10,7)}{\text{C}(15,7)} = \dfrac{8}{429}$.</p>
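As a sanity check on the counting arguments in the answers above, a short brute-force enumeration (an illustrative sketch, not part of either answer) reproduces $8/429$ and confirms that both closed forms agree with it:

```python
from fractions import Fraction
from itertools import combinations
from math import comb

# Rank the 16 players 1 (strongest) to 16; P6 is rank 6.  P6 reaches the
# final iff the 7 players drawn into P6's half of the bracket (chosen from
# the other 15) are all weaker than P6, i.e. have ranks 7..16.
others = [r for r in range(1, 17) if r != 6]
favourable = sum(1 for half in combinations(others, 7) if min(half) > 6)
prob = Fraction(favourable, comb(15, 7))

print(prob)  # 8/429
# Both closed forms from the answers agree with the enumeration:
assert prob == Fraction(comb(10, 7), comb(15, 7)) == Fraction(comb(8, 5), comb(15, 5))
```

The enumeration counts exactly $\text{C}(10,7) = 120$ favourable draws out of $\text{C}(15,7) = 6435$, matching both derivations.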
Why are the cardinalities of $\mathbb{R}^n$ and $\mathbb{R}$ the same?
2,091,541
6
https://math.stackexchange.com/questions/2091541/why-are-the-cardinality-of-mathbbrn-and-mathbbr-the-same
<p>As I study the first part of abstract algebra, I have a question: why is $\left\vert{\mathbb R}\right\vert = \left\vert{\mathbb R^2}\right\vert$?</p> <p>Moreover, I have learned that $\left\vert{\mathbb R}\right\vert = \left\vert{\mathbb R^n}\right\vert$.</p> <p>Does the equality apply to any infinite set? How can I prove these facts?</p>
[ { "answer_id": 2091553, "score": 5, "text": "<p>You can think of a real number in decimal notation: an infinite sequence of digits with a decimal point somewhere in there. Now you can take $n$ (e.g. $n=3$) such numbers and interleave their digits like this:</p>\n\n<pre><code> 1 2. 3 5 6…\n 1 3 7 . 2 7 3 …\n 9 3 .1 5 8 …\n⇒ 10931372.123575836…\n</code></pre>\n\n<p>So this way you have turned a triple of real numbers into a single real number in a reversible fashion. To make this rigorous you'd have to consider special cases like $0.999… = 1$, but the above should convey the general idea in an intuitive way.</p>\n\n<p>For $n=\\infty$ this argument no longer holds in this fashion. And it's not that trivial there, either, since it depends on the <em>kind</em> of infinity. Most likely you have countable infinity in mind there, I guess, so you are asking about $\\mathfrak c^{\\aleph_0}$.</p>\n" }, { "answer_id": 2091610, "score": 5, "text": "<p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class=\"math-container\">$\\mathbb{R}^n$</span> can be at most that of <span class=\"math-container\">$\\mathbb{R}$</span>. A space-filling curve is a curve parametrized by a surjective mapping <span class=\"math-container\">$s: \\mathbb{R} \\rightarrow \\mathbb{R}^n$</span>.
Since <span class=\"math-container\">$s$</span> is surjective, we must conclude that <span class=\"math-container\">$|\\mathbb{R}^n| \\leq |\\mathbb{R}|$</span>.</p>\n<p>More generally, to show that we have <span class=\"math-container\">$|A^n| = |A|$</span> for any infinite set <span class=\"math-container\">$A$</span> and finite <span class=\"math-container\">$n \\geq 1$</span>, it is enough to show that <span class=\"math-container\">$|A \\times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p>\n<p>To prove the result, consider the family <span class=\"math-container\">$\\mathcal{F}$</span> of those bijections <span class=\"math-container\">$f$</span> such that <span class=\"math-container\">$f: X \\times X \\rightarrow X$</span>, where <span class=\"math-container\">$X$</span> is a subset of <span class=\"math-container\">$A$</span>. <span class=\"math-container\">$\\mathcal{F}$</span> is non-empty, for if <span class=\"math-container\">$X$</span> is a countable subset of <span class=\"math-container\">$A$</span>, then the usual dovetailing argument tells us that such an <span class=\"math-container\">$f$</span> exists. It is also easy to see that <span class=\"math-container\">$\\mathcal{F}$</span> can be equipped with a partial order <span class=\"math-container\">$\\sqsubseteq$</span>, namely that of extension: <span class=\"math-container\">$f_1 \\sqsubseteq f_2$</span> if <span class=\"math-container\">$\\mathrm{dom}(f_1) \\subseteq \\mathrm{dom}(f_2)$</span> and whenever <span class=\"math-container\">$f_1(x) = y$</span>, then also <span class=\"math-container\">$f_2(x) = y$</span>. Since every chain in <span class=\"math-container\">$(\\mathcal{F}, \\sqsubseteq)$</span> has an upper bound (the union of the chain), Zorn's lemma tells us that <span class=\"math-container\">$\\mathcal{F}$</span> has a maximal element <span class=\"math-container\">$f_{max}$</span>. So let <span class=\"math-container\">$\\mathrm{dom}(f_{max}) = X \\times X$</span>. 
All that is left is then to show that <span class=\"math-container\">$|X| = |A|$</span>.</p>\n<p>Suppose <span class=\"math-container\">$X$</span> had strictly smaller cardinality than <span class=\"math-container\">$A$</span>. But then, since <span class=\"math-container\">$|A| = \\max(|X|,|A \\setminus X|)$</span> we must have that <span class=\"math-container\">$|A| = |A \\setminus X|$</span> and therefore that <span class=\"math-container\">$|X| &lt; |A \\setminus X|$</span>. This then means that we can find a <span class=\"math-container\">$Y \\subset A \\setminus X$</span> of the same cardinality as <span class=\"math-container\">$X$</span>. But now consider the sets <span class=\"math-container\">$X \\times Y$</span>, <span class=\"math-container\">$Y \\times X$</span> and <span class=\"math-container\">$Y \\times Y$</span>. Each of these sets is infinite and has the same cardinality as <span class=\"math-container\">$X \\times X$</span> and therefore also the same cardinality as <span class=\"math-container\">$X$</span> and as <span class=\"math-container\">$Y$</span>. Therefore we have</p>\n<p><span class=\"math-container\">$$ |X \\times Y \\cup Y \\times X \\cup Y \\times Y| = |Y|$$</span></p>\n<p>But <span class=\"math-container\">$X \\times X \\cup X \\times Y \\cup Y \\times X \\cup Y \\times Y = (X \\cup Y) \\times (X \\cup Y)$</span>, so this means that we can extend <span class=\"math-container\">$f_{max}$</span> such that its domain is <span class=\"math-container\">$(X \\cup Y) \\times (X \\cup Y)$</span>, and that is a contradiction, since <span class=\"math-container\">$f_{max}$</span> was assumed to be maximal.</p>\n" } ]
All that is left is then to show that <span class=\"math-container\">$|X| = |A|$</span>.</p>\n<p>Suppose <span class=\"math-container\">$X$</span> had strictly smaller cardinality than <span class=\"math-container\">$A$</span>. But then, since <span class=\"math-container\">$|A| = \\max(|X|,|A \\setminus X|)$</span> we must have that <span class=\"math-container\">$|A| = |A \\setminus X|$</span> and therefore that <span class=\"math-container\">$|X| &lt; |A \\setminus X|$</span>. This then means that we can find a <span class=\"math-container\">$Y \\subset A \\setminus X$</span> of the same cardinality as <span class=\"math-container\">$X$</span>. But now consider the sets <span class=\"math-container\">$X \\times Y$</span>, <span class=\"math-container\">$Y \\times X$</span> and <span class=\"math-container\">$Y \\times Y$</span>. Each of these sets is infinite and has the same cardinality as <span class=\"math-container\">$X \\times X$</span> and therefore also the same cardinality as <span class=\"math-container\">$X$</span> and as <span class=\"math-container\">$Y$</span>. Therefore we have</p>\n<p><span class=\"math-container\">$$ |X \\times Y \\cup Y \\times X \\cup Y \\times Y| = |Y|$$</span></p>\n<p>But <span class=\"math-container\">$X \\times Y \\cup Y \\times X \\cup Y \\times Y = (X \\cup Y) \\times (X \\cup Y)$</span>, so this means that we can extend <span class=\"math-container\">$f_{max}$</span> such that its domain is <span class=\"math-container\">$(X \\cup Y) \\times (X \\cup Y)$</span>, and that is a contradiction, since <span class=\"math-container\">$f_{max}$</span> was assumed to be maximal.</p>\n" }, { "answer_id": 2091553, "score": 5, "text": "<p>You can think of a real number in decimal notation: an infinite sequence of digits with a decimal point somewhere in there. Now you can take $n$ (e.g. $n=3$) such numbers and interleave their digits like this:</p>\n\n<pre><code> 1 2. 3 5 6…\n 1 3 7 . 
2 7 3 …\n 9 3 .1 5 8 …\n⇒ 10931372.123575836…\n</code></pre>\n\n<p>So this way you have turned a triple of real numbers into a single real number in a reversible fashion. To make this rigorous you'd have to consider special cases like $0.999… = 1$, but the above should convey the general idea in an intuitive way.</p>\n\n<p>For $n=\\infty$ this argument no longer holds in this fashion. And it's not that trivial there, either, since it depends on the <em>kind</em> of infinity. Most likely you have contable infinity in mind there, I guess, so you are asking about $\\mathfrak c^{\\aleph_0}$.</p>\n" }, { "answer_id": 2091610, "score": 5, "text": "<p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class=\"math-container\">$\\mathbb{R}^n$</span> can be at most that of <span class=\"math-container\">$\\mathbb{R}$</span>. A space-filling curve in a curve parametrized by a surjective mapping <span class=\"math-container\">$s: \\mathbb{R} \\rightarrow \\mathbb{R}^n$</span>. Since <span class=\"math-container\">$s$</span> is surjective, we must conclude that <span class=\"math-container\">$|\\mathbb{R}^n| \\leq |\\mathbb{R}|$</span>.</p>\n<p>More generally, to show that we have <span class=\"math-container\">$|A^n| = |A|$</span> for any infinite set <span class=\"math-container\">$A$</span> and finite <span class=\"math-container\">$n \\geq 1$</span>, it is enough to show that <span class=\"math-container\">$|A \\times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p>\n<p>To prove the result, consider the family <span class=\"math-container\">$\\mathcal{F}$</span> of those functions <span class=\"math-container\">$f$</span> such that <span class=\"math-container\">$f: X \\times X \\rightarrow X$</span>, where <span class=\"math-container\">$X$</span> is a subset of <span class=\"math-container\">$A$</span>. 
<span class=\"math-container\">$\\mathcal{F}$</span> is non-empty, for if <span class=\"math-container\">$X$</span> is a countable subset of <span class=\"math-container\">$A$</span>, then the usual dovetailing argument tells us that such an <span class=\"math-container\">$f$</span> exist. It is also easy to see that <span class=\"math-container\">$\\mathcal{F}$</span> can be equipped with a partial order <span class=\"math-container\">$\\sqsubseteq$</span>, namely that of extension: <span class=\"math-container\">$f_1 \\sqsubseteq f_2$</span> if <span class=\"math-container\">$\\mathrm{dom}(f_1) \\subseteq \\mathrm{dom}(f_2)$</span> and whenever <span class=\"math-container\">$f_1(x) = y$</span>, then also <span class=\"math-container\">$f_2(x) = y$</span>. Because <span class=\"math-container\">$\\mathcal{F}$</span> is partially ordered, we know from Zorn's lemma that it has a maximal element <span class=\"math-container\">$f_{max}$</span>. So let <span class=\"math-container\">$\\mathrm{dom}(f_{max}) = X$</span>. All that is left is then to show that <span class=\"math-container\">$|X| = |A|$</span>.</p>\n<p>Suppose <span class=\"math-container\">$X$</span> had strictly smaller cardinality than <span class=\"math-container\">$A$</span>. But then, since <span class=\"math-container\">$|A| = \\max(|X|,|A \\setminus X|)$</span> we must have that <span class=\"math-container\">$|A| = |A \\setminus X|$</span> and therefore that <span class=\"math-container\">$|X| &lt; |A \\setminus X|$</span>. This then means that we can find a <span class=\"math-container\">$Y \\subset A \\setminus X$</span> of the same cardinality as <span class=\"math-container\">$X$</span>. But now consider the sets <span class=\"math-container\">$X \\times Y$</span>, <span class=\"math-container\">$Y \\times X$</span> and <span class=\"math-container\">$Y \\times Y$</span>. 
Each of these sets is infinite and has the same cardinality as <span class=\"math-container\">$X \\times X$</span> and therefore also the same cardinality as <span class=\"math-container\">$X$</span> and as <span class=\"math-container\">$Y$</span>. Therefore we have</p>\n<p><span class=\"math-container\">$$ |X \\times Y \\cup Y \\times X \\cup Y \\times Y| = |Y|$$</span></p>\n<p>But <span class=\"math-container\">$X \\times Y \\cup Y \\times X \\cup Y \\times Y = (X \\cup Y) \\times (X \\cup Y)$</span>, so this means that we can extend <span class=\"math-container\">$f_{max}$</span> such that its domain is <span class=\"math-container\">$(X \\cup Y) \\times (X \\cup Y)$</span>, and that is a contradiction, since <span class=\"math-container\">$f_{max}$</span> was assumed to be maximal.</p>\n" }, { "answer_id": 2091553, "score": 5, "text": "<p>You can think of a real number in decimal notation: an infinite sequence of digits with a decimal point somewhere in there. Now you can take $n$ (e.g. $n=3$) such numbers and interleave their digits like this:</p>\n\n<pre><code> 1 2. 3 5 6…\n 1 3 7 . 2 7 3 …\n 9 3 .1 5 8 …\n⇒ 10931372.123575836…\n</code></pre>\n\n<p>So this way you have turned a triple of real numbers into a single real number in a reversible fashion. To make this rigorous you'd have to consider special cases like $0.999… = 1$, but the above should convey the general idea in an intuitive way.</p>\n\n<p>For $n=\\infty$ this argument no longer holds in this fashion. And it's not that trivial there, either, since it depends on the <em>kind</em> of infinity. Most likely you have contable infinity in mind there, I guess, so you are asking about $\\mathfrak c^{\\aleph_0}$.</p>\n" }, { "answer_id": 2091610, "score": 5, "text": "<p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class=\"math-container\">$\\mathbb{R}^n$</span> can be at most that of <span class=\"math-container\">$\\mathbb{R}$</span>. 
A space-filling curve in a curve parametrized by a surjective mapping <span class=\"math-container\">$s: \\mathbb{R} \\rightarrow \\mathbb{R}^n$</span>. Since <span class=\"math-container\">$s$</span> is surjective, we must conclude that <span class=\"math-container\">$|\\mathbb{R}^n| \\leq |\\mathbb{R}|$</span>.</p>\n<p>More generally, to show that we have <span class=\"math-container\">$|A^n| = |A|$</span> for any infinite set <span class=\"math-container\">$A$</span> and finite <span class=\"math-container\">$n \\geq 1$</span>, it is enough to show that <span class=\"math-container\">$|A \\times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p>\n<p>To prove the result, consider the family <span class=\"math-container\">$\\mathcal{F}$</span> of those functions <span class=\"math-container\">$f$</span> such that <span class=\"math-container\">$f: X \\times X \\rightarrow X$</span>, where <span class=\"math-container\">$X$</span> is a subset of <span class=\"math-container\">$A$</span>. <span class=\"math-container\">$\\mathcal{F}$</span> is non-empty, for if <span class=\"math-container\">$X$</span> is a countable subset of <span class=\"math-container\">$A$</span>, then the usual dovetailing argument tells us that such an <span class=\"math-container\">$f$</span> exist. It is also easy to see that <span class=\"math-container\">$\\mathcal{F}$</span> can be equipped with a partial order <span class=\"math-container\">$\\sqsubseteq$</span>, namely that of extension: <span class=\"math-container\">$f_1 \\sqsubseteq f_2$</span> if <span class=\"math-container\">$\\mathrm{dom}(f_1) \\subseteq \\mathrm{dom}(f_2)$</span> and whenever <span class=\"math-container\">$f_1(x) = y$</span>, then also <span class=\"math-container\">$f_2(x) = y$</span>. Because <span class=\"math-container\">$\\mathcal{F}$</span> is partially ordered, we know from Zorn's lemma that it has a maximal element <span class=\"math-container\">$f_{max}$</span>. 
So let <span class=\"math-container\">$\\mathrm{dom}(f_{max}) = X$</span>. All that is left is then to show that <span class=\"math-container\">$|X| = |A|$</span>.</p>\n<p>Suppose <span class=\"math-container\">$X$</span> had strictly smaller cardinality than <span class=\"math-container\">$A$</span>. But then, since <span class=\"math-container\">$|A| = \\max(|X|,|A \\setminus X|)$</span> we must have that <span class=\"math-container\">$|A| = |A \\setminus X|$</span> and therefore that <span class=\"math-container\">$|X| &lt; |A \\setminus X|$</span>. This then means that we can find a <span class=\"math-container\">$Y \\subset A \\setminus X$</span> of the same cardinality as <span class=\"math-container\">$X$</span>. But now consider the sets <span class=\"math-container\">$X \\times Y$</span>, <span class=\"math-container\">$Y \\times X$</span> and <span class=\"math-container\">$Y \\times Y$</span>. Each of these sets is infinite and has the same cardinality as <span class=\"math-container\">$X \\times X$</span> and therefore also the same cardinality as <span class=\"math-container\">$X$</span> and as <span class=\"math-container\">$Y$</span>. Therefore we have</p>\n<p><span class=\"math-container\">$$ |X \\times Y \\cup Y \\times X \\cup Y \\times Y| = |Y|$$</span></p>\n<p>But <span class=\"math-container\">$X \\times Y \\cup Y \\times X \\cup Y \\times Y = (X \\cup Y) \\times (X \\cup Y)$</span>, so this means that we can extend <span class=\"math-container\">$f_{max}$</span> such that its domain is <span class=\"math-container\">$(X \\cup Y) \\times (X \\cup Y)$</span>, and that is a contradiction, since <span class=\"math-container\">$f_{max}$</span> was assumed to be maximal.</p>\n" }, { "answer_id": 2091553, "score": 5, "text": "<p>You can think of a real number in decimal notation: an infinite sequence of digits with a decimal point somewhere in there. Now you can take $n$ (e.g. 
$n=3$) such numbers and interleave their digits like this:</p>\n\n<pre><code> 1 2. 3 5 6…\n 1 3 7 . 2 7 3 …\n 9 3 .1 5 8 …\n⇒ 10931372.123575836…\n</code></pre>\n\n<p>So this way you have turned a triple of real numbers into a single real number in a reversible fashion. To make this rigorous you'd have to consider special cases like $0.999… = 1$, but the above should convey the general idea in an intuitive way.</p>\n\n<p>For $n=\\infty$ this argument no longer holds in this fashion. And it's not that trivial there, either, since it depends on the <em>kind</em> of infinity. Most likely you have contable infinity in mind there, I guess, so you are asking about $\\mathfrak c^{\\aleph_0}$.</p>\n" }, { "answer_id": 2091610, "score": 5, "text": "<p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class=\"math-container\">$\\mathbb{R}^n$</span> can be at most that of <span class=\"math-container\">$\\mathbb{R}$</span>. A space-filling curve in a curve parametrized by a surjective mapping <span class=\"math-container\">$s: \\mathbb{R} \\rightarrow \\mathbb{R}^n$</span>. 
Since <span class=\"math-container\">$s$</span> is surjective, we must conclude that <span class=\"math-container\">$|\\mathbb{R}^n| \\leq |\\mathbb{R}|$</span>.</p>\n<p>More generally, to show that we have <span class=\"math-container\">$|A^n| = |A|$</span> for any infinite set <span class=\"math-container\">$A$</span> and finite <span class=\"math-container\">$n \\geq 1$</span>, it is enough to show that <span class=\"math-container\">$|A \\times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p>\n<p>To prove the result, consider the family <span class=\"math-container\">$\\mathcal{F}$</span> of those functions <span class=\"math-container\">$f$</span> such that <span class=\"math-container\">$f: X \\times X \\rightarrow X$</span>, where <span class=\"math-container\">$X$</span> is a subset of <span class=\"math-container\">$A$</span>. <span class=\"math-container\">$\\mathcal{F}$</span> is non-empty, for if <span class=\"math-container\">$X$</span> is a countable subset of <span class=\"math-container\">$A$</span>, then the usual dovetailing argument tells us that such an <span class=\"math-container\">$f$</span> exist. It is also easy to see that <span class=\"math-container\">$\\mathcal{F}$</span> can be equipped with a partial order <span class=\"math-container\">$\\sqsubseteq$</span>, namely that of extension: <span class=\"math-container\">$f_1 \\sqsubseteq f_2$</span> if <span class=\"math-container\">$\\mathrm{dom}(f_1) \\subseteq \\mathrm{dom}(f_2)$</span> and whenever <span class=\"math-container\">$f_1(x) = y$</span>, then also <span class=\"math-container\">$f_2(x) = y$</span>. Because <span class=\"math-container\">$\\mathcal{F}$</span> is partially ordered, we know from Zorn's lemma that it has a maximal element <span class=\"math-container\">$f_{max}$</span>. So let <span class=\"math-container\">$\\mathrm{dom}(f_{max}) = X$</span>. 
All that is left is then to show that <span class=\"math-container\">$|X| = |A|$</span>.</p>\n<p>Suppose <span class=\"math-container\">$X$</span> had strictly smaller cardinality than <span class=\"math-container\">$A$</span>. But then, since <span class=\"math-container\">$|A| = \\max(|X|,|A \\setminus X|)$</span> we must have that <span class=\"math-container\">$|A| = |A \\setminus X|$</span> and therefore that <span class=\"math-container\">$|X| &lt; |A \\setminus X|$</span>. This then means that we can find a <span class=\"math-container\">$Y \\subset A \\setminus X$</span> of the same cardinality as <span class=\"math-container\">$X$</span>. But now consider the sets <span class=\"math-container\">$X \\times Y$</span>, <span class=\"math-container\">$Y \\times X$</span> and <span class=\"math-container\">$Y \\times Y$</span>. Each of these sets is infinite and has the same cardinality as <span class=\"math-container\">$X \\times X$</span> and therefore also the same cardinality as <span class=\"math-container\">$X$</span> and as <span class=\"math-container\">$Y$</span>. Therefore we have</p>\n<p><span class=\"math-container\">$$ |X \\times Y \\cup Y \\times X \\cup Y \\times Y| = |Y|$$</span></p>\n<p>But <span class=\"math-container\">$X \\times Y \\cup Y \\times X \\cup Y \\times Y = (X \\cup Y) \\times (X \\cup Y)$</span>, so this means that we can extend <span class=\"math-container\">$f_{max}$</span> such that its domain is <span class=\"math-container\">$(X \\cup Y) \\times (X \\cup Y)$</span>, and that is a contradiction, since <span class=\"math-container\">$f_{max}$</span> was assumed to be maximal.</p>\n" }, { "answer_id": 2091553, "score": 5, "text": "<p>You can think of a real number in decimal notation: an infinite sequence of digits with a decimal point somewhere in there. Now you can take $n$ (e.g. $n=3$) such numbers and interleave their digits like this:</p>\n\n<pre><code> 1 2. 3 5 6…\n 1 3 7 . 
2 7 3 …\n 9 3 .1 5 8 …\n⇒ 10931372.123575836…\n</code></pre>\n\n<p>So this way you have turned a triple of real numbers into a single real number in a reversible fashion. To make this rigorous you'd have to consider special cases like $0.999… = 1$, but the above should convey the general idea in an intuitive way.</p>\n\n<p>For $n=\\infty$ this argument no longer holds in this fashion. And it's not that trivial there, either, since it depends on the <em>kind</em> of infinity. Most likely you have contable infinity in mind there, I guess, so you are asking about $\\mathfrak c^{\\aleph_0}$.</p>\n" }, { "answer_id": 2091610, "score": 5, "text": "<p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class=\"math-container\">$\\mathbb{R}^n$</span> can be at most that of <span class=\"math-container\">$\\mathbb{R}$</span>. A space-filling curve in a curve parametrized by a surjective mapping <span class=\"math-container\">$s: \\mathbb{R} \\rightarrow \\mathbb{R}^n$</span>. Since <span class=\"math-container\">$s$</span> is surjective, we must conclude that <span class=\"math-container\">$|\\mathbb{R}^n| \\leq |\\mathbb{R}|$</span>.</p>\n<p>More generally, to show that we have <span class=\"math-container\">$|A^n| = |A|$</span> for any infinite set <span class=\"math-container\">$A$</span> and finite <span class=\"math-container\">$n \\geq 1$</span>, it is enough to show that <span class=\"math-container\">$|A \\times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p>\n<p>To prove the result, consider the family <span class=\"math-container\">$\\mathcal{F}$</span> of those functions <span class=\"math-container\">$f$</span> such that <span class=\"math-container\">$f: X \\times X \\rightarrow X$</span>, where <span class=\"math-container\">$X$</span> is a subset of <span class=\"math-container\">$A$</span>. 
<span class=\"math-container\">$\\mathcal{F}$</span> is non-empty, for if <span class=\"math-container\">$X$</span> is a countable subset of <span class=\"math-container\">$A$</span>, then the usual dovetailing argument tells us that such an <span class=\"math-container\">$f$</span> exist. It is also easy to see that <span class=\"math-container\">$\\mathcal{F}$</span> can be equipped with a partial order <span class=\"math-container\">$\\sqsubseteq$</span>, namely that of extension: <span class=\"math-container\">$f_1 \\sqsubseteq f_2$</span> if <span class=\"math-container\">$\\mathrm{dom}(f_1) \\subseteq \\mathrm{dom}(f_2)$</span> and whenever <span class=\"math-container\">$f_1(x) = y$</span>, then also <span class=\"math-container\">$f_2(x) = y$</span>. Because <span class=\"math-container\">$\\mathcal{F}$</span> is partially ordered, we know from Zorn's lemma that it has a maximal element <span class=\"math-container\">$f_{max}$</span>. So let <span class=\"math-container\">$\\mathrm{dom}(f_{max}) = X$</span>. All that is left is then to show that <span class=\"math-container\">$|X| = |A|$</span>.</p>\n<p>Suppose <span class=\"math-container\">$X$</span> had strictly smaller cardinality than <span class=\"math-container\">$A$</span>. But then, since <span class=\"math-container\">$|A| = \\max(|X|,|A \\setminus X|)$</span> we must have that <span class=\"math-container\">$|A| = |A \\setminus X|$</span> and therefore that <span class=\"math-container\">$|X| &lt; |A \\setminus X|$</span>. This then means that we can find a <span class=\"math-container\">$Y \\subset A \\setminus X$</span> of the same cardinality as <span class=\"math-container\">$X$</span>. But now consider the sets <span class=\"math-container\">$X \\times Y$</span>, <span class=\"math-container\">$Y \\times X$</span> and <span class=\"math-container\">$Y \\times Y$</span>. 
Each of these sets is infinite and has the same cardinality as <span class=\"math-container\">$X \\times X$</span> and therefore also the same cardinality as <span class=\"math-container\">$X$</span> and as <span class=\"math-container\">$Y$</span>. Therefore we have</p>\n<p><span class=\"math-container\">$$ |X \\times Y \\cup Y \\times X \\cup Y \\times Y| = |Y|$$</span></p>\n<p>But <span class=\"math-container\">$X \\times Y \\cup Y \\times X \\cup Y \\times Y = (X \\cup Y) \\times (X \\cup Y)$</span>, so this means that we can extend <span class=\"math-container\">$f_{max}$</span> such that its domain is <span class=\"math-container\">$(X \\cup Y) \\times (X \\cup Y)$</span>, and that is a contradiction, since <span class=\"math-container\">$f_{max}$</span> was assumed to be maximal.</p>\n" } ]
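The digit-interleaving construction from the first answer can be sketched in code. The following is a simplified, hypothetical demo (the names `interleave` and `deinterleave` are my own, not from the answers): it works on fixed-length digit strings rather than genuine infinite decimal expansions, so it sidesteps the $0.999… = 1$ subtlety the answer mentions, but it shows why the packing is reversible.

```python
# Sketch of the digit-interleaving idea: pack n equal-length digit
# strings into a single string, reversibly. Real numbers have
# infinitely many digits and edge cases like 0.999... = 1, which a
# rigorous proof must handle; this demo only illustrates the pairing.

def interleave(digit_strings):
    """Merge equal-length digit strings by taking one digit from each in turn."""
    assert len({len(s) for s in digit_strings}) == 1, "pad to a common length first"
    # zip(*...) yields one tuple per digit position, across all inputs
    return "".join("".join(column) for column in zip(*digit_strings))

def deinterleave(s, n):
    """Invert interleave: every n-th character belongs to the same original string."""
    return ["".join(s[i::n]) for i in range(n)]

if __name__ == "__main__":
    # Digits of 12.356, 137.273 and 93.158, padded to a common length
    # (decimal points stripped; tracking their positions is bookkeeping).
    triple = ["0123560", "1372730", "0931580"]
    packed = interleave(triple)
    assert deinterleave(packed, 3) == triple  # the packing is reversible
    print(packed)
```

Because the map is invertible, it gives an injection from triples of (truncated) digit strings into single strings, which is the finite-precision shadow of the bijection argument in the answer.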
Question: Why are the cardinality of $\mathbb{R}^n$ and $\mathbb{R}$ the same? Answer 1: <p>You can think of a real number in decimal notation: an infinite sequence of digits with a decimal point somewhere in there. Now you can take $n$ (e.g. $n=3$) such numbers and interleave their digits like this:</p> <pre><code> 1 2. 3 5 6…
 1 3 7 . 2 7 3 …
 9 3 .1 5 8 …
⇒ 10931372.123575836…
</code></pre> <p>So this way you have turned a triple of real numbers into a single real number in a reversible fashion. To make this rigorous you'd have to consider special cases like $0.999… = 1$, but the above should convey the general idea in an intuitive way.</p> <p>For $n=\infty$ this argument no longer holds in this fashion. And it's not that trivial there, either, since it depends on the <em>kind</em> of infinity. Most likely you have countable infinity in mind there, I guess, so you are asking about $\mathfrak c^{\aleph_0}$.</p> Answer 2: <p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class="math-container">$\mathbb{R}^n$</span> can be at most that of <span class="math-container">$\mathbb{R}$</span>. A space-filling curve is a curve parametrized by a surjective mapping <span class="math-container">$s: \mathbb{R} \rightarrow \mathbb{R}^n$</span>. Since <span class="math-container">$s$</span> is surjective, we must conclude that <span class="math-container">$|\mathbb{R}^n| \leq |\mathbb{R}|$</span>.</p> <p>More generally, to show that we have <span class="math-container">$|A^n| = |A|$</span> for any infinite set <span class="math-container">$A$</span> and finite <span class="math-container">$n \geq 1$</span>, it is enough to show that <span class="math-container">$|A \times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p> <p>To prove the result, consider the family <span class="math-container">$\mathcal{F}$</span> of those bijections <span class="math-container">$f$</span> such that <span class="math-container">$f: X \times X \rightarrow X$</span>, where <span class="math-container">$X$</span> is a subset of <span class="math-container">$A$</span>. <span class="math-container">$\mathcal{F}$</span> is non-empty, for if <span class="math-container">$X$</span> is a countable subset of <span class="math-container">$A$</span>, then the usual dovetailing argument tells us that such an <span class="math-container">$f$</span> exists. It is also easy to see that <span class="math-container">$\mathcal{F}$</span> can be equipped with a partial order <span class="math-container">$\sqsubseteq$</span>, namely that of extension: <span class="math-container">$f_1 \sqsubseteq f_2$</span> if <span class="math-container">$\mathrm{dom}(f_1) \subseteq \mathrm{dom}(f_2)$</span> and whenever <span class="math-container">$f_1(x) = y$</span>, then also <span class="math-container">$f_2(x) = y$</span>. Because every chain in <span class="math-container">$\mathcal{F}$</span> has an upper bound in <span class="math-container">$\mathcal{F}$</span> (the union of the chain), we know from Zorn's lemma that it has a maximal element <span class="math-container">$f_{max}$</span>. So let <span class="math-container">$\mathrm{dom}(f_{max}) = X$</span>. All that is left is then to show that <span class="math-container">$|X| = |A|$</span>.</p> <p>Suppose <span class="math-container">$X$</span> had strictly smaller cardinality than <span class="math-container">$A$</span>. But then, since <span class="math-container">$|A| = \max(|X|,|A \setminus X|)$</span> we must have that <span class="math-container">$|A| = |A \setminus X|$</span> and therefore that <span class="math-container">$|X| &lt; |A \setminus X|$</span>. This then means that we can find a <span class="math-container">$Y \subset A \setminus X$</span> of the same cardinality as <span class="math-container">$X$</span>. But now consider the sets <span class="math-container">$X \times Y$</span>, <span class="math-container">$Y \times X$</span> and <span class="math-container">$Y \times Y$</span>. Each of these sets is infinite and has the same cardinality as <span class="math-container">$X \times X$</span> and therefore also the same cardinality as <span class="math-container">$X$</span> and as <span class="math-container">$Y$</span>. Therefore we have</p> <p><span class="math-container">$$ |X \times Y \cup Y \times X \cup Y \times Y| = |Y|$$</span></p> <p>But <span class="math-container">$X \times Y \cup Y \times X \cup Y \times Y = (X \cup Y) \times (X \cup Y)$</span>, so this means that we can extend <span class="math-container">$f_{max}$</span> such that its domain is <span class="math-container">$(X \cup Y) \times (X \cup Y)$</span>, and that is a contradiction, since <span class="math-container">$f_{max}$</span> was assumed to be maximal.</p> Answer 3: <p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class="math-container">$\mathbb{R}^n$</span> can be at most that of <span class="math-container">$\mathbb{R}$</span>. A space-filling curve is a curve parametrized by a surjective mapping <span class="math-container">$s: \mathbb{R} \rightarrow \mathbb{R}^n$</span>. 
Since <span class="math-container">$s$</span> is surjective, we must conclude that <span class="math-container">$|\mathbb{R}^n| \leq |\mathbb{R}|$</span>.</p> <p>More generally, to show that we have <span class="math-container">$|A^n| = |A|$</span> for any infinite set <span class="math-container">$A$</span> and finite <span class="math-container">$n \geq 1$</span>, it is enough to show that <span class="math-container">$|A \times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p> <p>To prove the result, consider the family <span class="math-container">$\mathcal{F}$</span> of those functions <span class="math-container">$f$</span> such that <span class="math-container">$f: X \times X \rightarrow X$</span>, where <span class="math-container">$X$</span> is a subset of <span class="math-container">$A$</span>. <span class="math-container">$\mathcal{F}$</span> is non-empty, for if <span class="math-container">$X$</span> is a countable subset of <span class="math-container">$A$</span>, then the usual dovetailing argument tells us that such an <span class="math-container">$f$</span> exist. It is also easy to see that <span class="math-container">$\mathcal{F}$</span> can be equipped with a partial order <span class="math-container">$\sqsubseteq$</span>, namely that of extension: <span class="math-container">$f_1 \sqsubseteq f_2$</span> if <span class="math-container">$\mathrm{dom}(f_1) \subseteq \mathrm{dom}(f_2)$</span> and whenever <span class="math-container">$f_1(x) = y$</span>, then also <span class="math-container">$f_2(x) = y$</span>. Because <span class="math-container">$\mathcal{F}$</span> is partially ordered, we know from Zorn's lemma that it has a maximal element <span class="math-container">$f_{max}$</span>. So let <span class="math-container">$\mathrm{dom}(f_{max}) = X$</span>. 
All that is left is then to show that <span class="math-container">$|X| = |A|$</span>.</p> <p>Suppose <span class="math-container">$X$</span> had strictly smaller cardinality than <span class="math-container">$A$</span>. But then, since <span class="math-container">$|A| = \max(|X|,|A \setminus X|)$</span> we must have that <span class="math-container">$|A| = |A \setminus X|$</span> and therefore that <span class="math-container">$|X| &lt; |A \setminus X|$</span>. This then means that we can find a <span class="math-container">$Y \subset A \setminus X$</span> of the same cardinality as <span class="math-container">$X$</span>. But now consider the sets <span class="math-container">$X \times Y$</span>, <span class="math-container">$Y \times X$</span> and <span class="math-container">$Y \times Y$</span>. Each of these sets is infinite and has the same cardinality as <span class="math-container">$X \times X$</span> and therefore also the same cardinality as <span class="math-container">$X$</span> and as <span class="math-container">$Y$</span>. Therefore we have</p> <p><span class="math-container">$$ |X \times Y \cup Y \times X \cup Y \times Y| = |Y|$$</span></p> <p>But <span class="math-container">$X \times Y \cup Y \times X \cup Y \times Y = (X \cup Y) \times (X \cup Y)$</span>, so this means that we can extend <span class="math-container">$f_{max}$</span> such that its domain is <span class="math-container">$(X \cup Y) \times (X \cup Y)$</span>, and that is a contradiction, since <span class="math-container">$f_{max}$</span> was assumed to be maximal.</p> Answer 7: <p>You can think of a real number in decimal notation: an infinite sequence of digits with a decimal point somewhere in there. Now you can take $n$ (e.g. $n=3$) such numbers and interleave their digits like this:</p> <pre><code> 1 2. 3 5 6… 1 3 7 . 
2 7 3 … 9 3 .1 5 8 … ⇒ 10931372.123575836… </code></pre> <p>So this way you have turned a triple of real numbers into a single real number in a reversible fashion. To make this rigorous you'd have to consider special cases like $0.999… = 1$, but the above should convey the general idea in an intuitive way.</p> <p>For $n=\infty$ this argument no longer holds in this fashion. And it's not that trivial there, either, since it depends on the <em>kind</em> of infinity. Most likely you have contable infinity in mind there, I guess, so you are asking about $\mathfrak c^{\aleph_0}$.</p> Answer 8: <p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class="math-container">$\mathbb{R}^n$</span> can be at most that of <span class="math-container">$\mathbb{R}$</span>. A space-filling curve in a curve parametrized by a surjective mapping <span class="math-container">$s: \mathbb{R} \rightarrow \mathbb{R}^n$</span>. Since <span class="math-container">$s$</span> is surjective, we must conclude that <span class="math-container">$|\mathbb{R}^n| \leq |\mathbb{R}|$</span>.</p> <p>More generally, to show that we have <span class="math-container">$|A^n| = |A|$</span> for any infinite set <span class="math-container">$A$</span> and finite <span class="math-container">$n \geq 1$</span>, it is enough to show that <span class="math-container">$|A \times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p> <p>To prove the result, consider the family <span class="math-container">$\mathcal{F}$</span> of those functions <span class="math-container">$f$</span> such that <span class="math-container">$f: X \times X \rightarrow X$</span>, where <span class="math-container">$X$</span> is a subset of <span class="math-container">$A$</span>. 
<span class="math-container">$\mathcal{F}$</span> is non-empty, for if <span class="math-container">$X$</span> is a countable subset of <span class="math-container">$A$</span>, then the usual dovetailing argument tells us that such an <span class="math-container">$f$</span> exist. It is also easy to see that <span class="math-container">$\mathcal{F}$</span> can be equipped with a partial order <span class="math-container">$\sqsubseteq$</span>, namely that of extension: <span class="math-container">$f_1 \sqsubseteq f_2$</span> if <span class="math-container">$\mathrm{dom}(f_1) \subseteq \mathrm{dom}(f_2)$</span> and whenever <span class="math-container">$f_1(x) = y$</span>, then also <span class="math-container">$f_2(x) = y$</span>. Because <span class="math-container">$\mathcal{F}$</span> is partially ordered, we know from Zorn's lemma that it has a maximal element <span class="math-container">$f_{max}$</span>. So let <span class="math-container">$\mathrm{dom}(f_{max}) = X$</span>. All that is left is then to show that <span class="math-container">$|X| = |A|$</span>.</p> <p>Suppose <span class="math-container">$X$</span> had strictly smaller cardinality than <span class="math-container">$A$</span>. But then, since <span class="math-container">$|A| = \max(|X|,|A \setminus X|)$</span> we must have that <span class="math-container">$|A| = |A \setminus X|$</span> and therefore that <span class="math-container">$|X| &lt; |A \setminus X|$</span>. This then means that we can find a <span class="math-container">$Y \subset A \setminus X$</span> of the same cardinality as <span class="math-container">$X$</span>. But now consider the sets <span class="math-container">$X \times Y$</span>, <span class="math-container">$Y \times X$</span> and <span class="math-container">$Y \times Y$</span>. 
Each of these sets is infinite and has the same cardinality as <span class="math-container">$X \times X$</span> and therefore also the same cardinality as <span class="math-container">$X$</span> and as <span class="math-container">$Y$</span>. Therefore we have</p> <p><span class="math-container">$$ |X \times Y \cup Y \times X \cup Y \times Y| = |Y|$$</span></p> <p>But <span class="math-container">$X \times Y \cup Y \times X \cup Y \times Y = (X \cup Y) \times (X \cup Y)$</span>, so this means that we can extend <span class="math-container">$f_{max}$</span> such that its domain is <span class="math-container">$(X \cup Y) \times (X \cup Y)$</span>, and that is a contradiction, since <span class="math-container">$f_{max}$</span> was assumed to be maximal.</p> Answer 9: <p>You can think of a real number in decimal notation: an infinite sequence of digits with a decimal point somewhere in there. Now you can take $n$ (e.g. $n=3$) such numbers and interleave their digits like this:</p> <pre><code> 1 2. 3 5 6… 1 3 7 . 2 7 3 … 9 3 .1 5 8 … ⇒ 10931372.123575836… </code></pre> <p>So this way you have turned a triple of real numbers into a single real number in a reversible fashion. To make this rigorous you'd have to consider special cases like $0.999… = 1$, but the above should convey the general idea in an intuitive way.</p> <p>For $n=\infty$ this argument no longer holds in this fashion. And it's not that trivial there, either, since it depends on the <em>kind</em> of infinity. Most likely you have contable infinity in mind there, I guess, so you are asking about $\mathfrak c^{\aleph_0}$.</p> Answer 10: <p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class="math-container">$\mathbb{R}^n$</span> can be at most that of <span class="math-container">$\mathbb{R}$</span>. A space-filling curve in a curve parametrized by a surjective mapping <span class="math-container">$s: \mathbb{R} \rightarrow \mathbb{R}^n$</span>. 
Since <span class="math-container">$s$</span> is surjective, we must conclude that <span class="math-container">$|\mathbb{R}^n| \leq |\mathbb{R}|$</span>.</p> <p>More generally, to show that we have <span class="math-container">$|A^n| = |A|$</span> for any infinite set <span class="math-container">$A$</span> and finite <span class="math-container">$n \geq 1$</span>, it is enough to show that <span class="math-container">$|A \times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p> <p>To prove the result, consider the family <span class="math-container">$\mathcal{F}$</span> of those functions <span class="math-container">$f$</span> such that <span class="math-container">$f: X \times X \rightarrow X$</span>, where <span class="math-container">$X$</span> is a subset of <span class="math-container">$A$</span>. <span class="math-container">$\mathcal{F}$</span> is non-empty, for if <span class="math-container">$X$</span> is a countable subset of <span class="math-container">$A$</span>, then the usual dovetailing argument tells us that such an <span class="math-container">$f$</span> exist. It is also easy to see that <span class="math-container">$\mathcal{F}$</span> can be equipped with a partial order <span class="math-container">$\sqsubseteq$</span>, namely that of extension: <span class="math-container">$f_1 \sqsubseteq f_2$</span> if <span class="math-container">$\mathrm{dom}(f_1) \subseteq \mathrm{dom}(f_2)$</span> and whenever <span class="math-container">$f_1(x) = y$</span>, then also <span class="math-container">$f_2(x) = y$</span>. Because <span class="math-container">$\mathcal{F}$</span> is partially ordered, we know from Zorn's lemma that it has a maximal element <span class="math-container">$f_{max}$</span>. So let <span class="math-container">$\mathrm{dom}(f_{max}) = X$</span>. 
All that is left is then to show that <span class="math-container">$|X| = |A|$</span>.</p> <p>Suppose <span class="math-container">$X$</span> had strictly smaller cardinality than <span class="math-container">$A$</span>. But then, since <span class="math-container">$|A| = \max(|X|,|A \setminus X|)$</span> we must have that <span class="math-container">$|A| = |A \setminus X|$</span> and therefore that <span class="math-container">$|X| &lt; |A \setminus X|$</span>. This then means that we can find a <span class="math-container">$Y \subset A \setminus X$</span> of the same cardinality as <span class="math-container">$X$</span>. But now consider the sets <span class="math-container">$X \times Y$</span>, <span class="math-container">$Y \times X$</span> and <span class="math-container">$Y \times Y$</span>. Each of these sets is infinite and has the same cardinality as <span class="math-container">$X \times X$</span> and therefore also the same cardinality as <span class="math-container">$X$</span> and as <span class="math-container">$Y$</span>. Therefore we have</p> <p><span class="math-container">$$ |X \times Y \cup Y \times X \cup Y \times Y| = |Y|$$</span></p> <p>But <span class="math-container">$X \times Y \cup Y \times X \cup Y \times Y = (X \cup Y) \times (X \cup Y)$</span>, so this means that we can extend <span class="math-container">$f_{max}$</span> such that its domain is <span class="math-container">$(X \cup Y) \times (X \cup Y)$</span>, and that is a contradiction, since <span class="math-container">$f_{max}$</span> was assumed to be maximal.</p> Answer 11: <p>You can think of a real number in decimal notation: an infinite sequence of digits with a decimal point somewhere in there. Now you can take $n$ (e.g. $n=3$) such numbers and interleave their digits like this:</p> <pre><code> 1 2. 3 5 6… 1 3 7 . 
2 7 3 … 9 3 .1 5 8 … ⇒ 10931372.123575836… </code></pre> <p>So this way you have turned a triple of real numbers into a single real number in a reversible fashion. To make this rigorous you'd have to consider special cases like $0.999… = 1$, but the above should convey the general idea in an intuitive way.</p> <p>For $n=\infty$ this argument no longer holds in this fashion. And it's not that trivial there, either, since it depends on the <em>kind</em> of infinity. Most likely you have contable infinity in mind there, I guess, so you are asking about $\mathfrak c^{\aleph_0}$.</p> Answer 12: <p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class="math-container">$\mathbb{R}^n$</span> can be at most that of <span class="math-container">$\mathbb{R}$</span>. A space-filling curve in a curve parametrized by a surjective mapping <span class="math-container">$s: \mathbb{R} \rightarrow \mathbb{R}^n$</span>. Since <span class="math-container">$s$</span> is surjective, we must conclude that <span class="math-container">$|\mathbb{R}^n| \leq |\mathbb{R}|$</span>.</p> <p>More generally, to show that we have <span class="math-container">$|A^n| = |A|$</span> for any infinite set <span class="math-container">$A$</span> and finite <span class="math-container">$n \geq 1$</span>, it is enough to show that <span class="math-container">$|A \times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p> <p>To prove the result, consider the family <span class="math-container">$\mathcal{F}$</span> of those functions <span class="math-container">$f$</span> such that <span class="math-container">$f: X \times X \rightarrow X$</span>, where <span class="math-container">$X$</span> is a subset of <span class="math-container">$A$</span>. 
<span class="math-container">$\mathcal{F}$</span> is non-empty, for if <span class="math-container">$X$</span> is a countable subset of <span class="math-container">$A$</span>, then the usual dovetailing argument tells us that such an <span class="math-container">$f$</span> exist. It is also easy to see that <span class="math-container">$\mathcal{F}$</span> can be equipped with a partial order <span class="math-container">$\sqsubseteq$</span>, namely that of extension: <span class="math-container">$f_1 \sqsubseteq f_2$</span> if <span class="math-container">$\mathrm{dom}(f_1) \subseteq \mathrm{dom}(f_2)$</span> and whenever <span class="math-container">$f_1(x) = y$</span>, then also <span class="math-container">$f_2(x) = y$</span>. Because <span class="math-container">$\mathcal{F}$</span> is partially ordered, we know from Zorn's lemma that it has a maximal element <span class="math-container">$f_{max}$</span>. So let <span class="math-container">$\mathrm{dom}(f_{max}) = X$</span>. All that is left is then to show that <span class="math-container">$|X| = |A|$</span>.</p> <p>Suppose <span class="math-container">$X$</span> had strictly smaller cardinality than <span class="math-container">$A$</span>. But then, since <span class="math-container">$|A| = \max(|X|,|A \setminus X|)$</span> we must have that <span class="math-container">$|A| = |A \setminus X|$</span> and therefore that <span class="math-container">$|X| &lt; |A \setminus X|$</span>. This then means that we can find a <span class="math-container">$Y \subset A \setminus X$</span> of the same cardinality as <span class="math-container">$X$</span>. But now consider the sets <span class="math-container">$X \times Y$</span>, <span class="math-container">$Y \times X$</span> and <span class="math-container">$Y \times Y$</span>. 
Each of these sets is infinite and has the same cardinality as <span class="math-container">$X \times X$</span> and therefore also the same cardinality as <span class="math-container">$X$</span> and as <span class="math-container">$Y$</span>. Therefore we have</p> <p><span class="math-container">$$ |X \times Y \cup Y \times X \cup Y \times Y| = |Y|$$</span></p> <p>But <span class="math-container">$X \times Y \cup Y \times X \cup Y \times Y = (X \cup Y) \times (X \cup Y)$</span>, so this means that we can extend <span class="math-container">$f_{max}$</span> such that its domain is <span class="math-container">$(X \cup Y) \times (X \cup Y)$</span>, and that is a contradiction, since <span class="math-container">$f_{max}$</span> was assumed to be maximal.</p> Answer 13: <p>You can think of a real number in decimal notation: an infinite sequence of digits with a decimal point somewhere in there. Now you can take $n$ (e.g. $n=3$) such numbers and interleave their digits like this:</p> <pre><code> 1 2. 3 5 6… 1 3 7 . 2 7 3 … 9 3 .1 5 8 … ⇒ 10931372.123575836… </code></pre> <p>So this way you have turned a triple of real numbers into a single real number in a reversible fashion. To make this rigorous you'd have to consider special cases like $0.999… = 1$, but the above should convey the general idea in an intuitive way.</p> <p>For $n=\infty$ this argument no longer holds in this fashion. And it's not that trivial there, either, since it depends on the <em>kind</em> of infinity. Most likely you have contable infinity in mind there, I guess, so you are asking about $\mathfrak c^{\aleph_0}$.</p> Answer 14: <p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class="math-container">$\mathbb{R}^n$</span> can be at most that of <span class="math-container">$\mathbb{R}$</span>. A space-filling curve in a curve parametrized by a surjective mapping <span class="math-container">$s: \mathbb{R} \rightarrow \mathbb{R}^n$</span>. 
Since <span class="math-container">$s$</span> is surjective, we must conclude that <span class="math-container">$|\mathbb{R}^n| \leq |\mathbb{R}|$</span>.</p> <p>More generally, to show that we have <span class="math-container">$|A^n| = |A|$</span> for any infinite set <span class="math-container">$A$</span> and finite <span class="math-container">$n \geq 1$</span>, it is enough to show that <span class="math-container">$|A \times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p> <p>To prove the result, consider the family <span class="math-container">$\mathcal{F}$</span> of those functions <span class="math-container">$f$</span> such that <span class="math-container">$f: X \times X \rightarrow X$</span>, where <span class="math-container">$X$</span> is a subset of <span class="math-container">$A$</span>. <span class="math-container">$\mathcal{F}$</span> is non-empty, for if <span class="math-container">$X$</span> is a countable subset of <span class="math-container">$A$</span>, then the usual dovetailing argument tells us that such an <span class="math-container">$f$</span> exist. It is also easy to see that <span class="math-container">$\mathcal{F}$</span> can be equipped with a partial order <span class="math-container">$\sqsubseteq$</span>, namely that of extension: <span class="math-container">$f_1 \sqsubseteq f_2$</span> if <span class="math-container">$\mathrm{dom}(f_1) \subseteq \mathrm{dom}(f_2)$</span> and whenever <span class="math-container">$f_1(x) = y$</span>, then also <span class="math-container">$f_2(x) = y$</span>. Because <span class="math-container">$\mathcal{F}$</span> is partially ordered, we know from Zorn's lemma that it has a maximal element <span class="math-container">$f_{max}$</span>. So let <span class="math-container">$\mathrm{dom}(f_{max}) = X$</span>. 
All that is left is then to show that <span class="math-container">$|X| = |A|$</span>.</p> <p>Suppose <span class="math-container">$X$</span> had strictly smaller cardinality than <span class="math-container">$A$</span>. But then, since <span class="math-container">$|A| = \max(|X|,|A \setminus X|)$</span> we must have that <span class="math-container">$|A| = |A \setminus X|$</span> and therefore that <span class="math-container">$|X| &lt; |A \setminus X|$</span>. This then means that we can find a <span class="math-container">$Y \subset A \setminus X$</span> of the same cardinality as <span class="math-container">$X$</span>. But now consider the sets <span class="math-container">$X \times Y$</span>, <span class="math-container">$Y \times X$</span> and <span class="math-container">$Y \times Y$</span>. Each of these sets is infinite and has the same cardinality as <span class="math-container">$X \times X$</span> and therefore also the same cardinality as <span class="math-container">$X$</span> and as <span class="math-container">$Y$</span>. Therefore we have</p> <p><span class="math-container">$$ |X \times Y \cup Y \times X \cup Y \times Y| = |Y|$$</span></p> <p>But <span class="math-container">$X \times Y \cup Y \times X \cup Y \times Y = (X \cup Y) \times (X \cup Y)$</span>, so this means that we can extend <span class="math-container">$f_{max}$</span> such that its domain is <span class="math-container">$(X \cup Y) \times (X \cup Y)$</span>, and that is a contradiction, since <span class="math-container">$f_{max}$</span> was assumed to be maximal.</p> Answer 15: <p>You can think of a real number in decimal notation: an infinite sequence of digits with a decimal point somewhere in there. Now you can take $n$ (e.g. $n=3$) such numbers and interleave their digits like this:</p> <pre><code> 1 2. 3 5 6… 1 3 7 . 
2 7 3 … 9 3 .1 5 8 … ⇒ 10931372.123575836… </code></pre> <p>So this way you have turned a triple of real numbers into a single real number in a reversible fashion. To make this rigorous you'd have to consider special cases like $0.999… = 1$, but the above should convey the general idea in an intuitive way.</p> <p>For $n=\infty$ this argument no longer holds in this fashion. And it's not that trivial there, either, since it depends on the <em>kind</em> of infinity. Most likely you have contable infinity in mind there, I guess, so you are asking about $\mathfrak c^{\aleph_0}$.</p> Answer 16: <p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class="math-container">$\mathbb{R}^n$</span> can be at most that of <span class="math-container">$\mathbb{R}$</span>. A space-filling curve in a curve parametrized by a surjective mapping <span class="math-container">$s: \mathbb{R} \rightarrow \mathbb{R}^n$</span>. Since <span class="math-container">$s$</span> is surjective, we must conclude that <span class="math-container">$|\mathbb{R}^n| \leq |\mathbb{R}|$</span>.</p> <p>More generally, to show that we have <span class="math-container">$|A^n| = |A|$</span> for any infinite set <span class="math-container">$A$</span> and finite <span class="math-container">$n \geq 1$</span>, it is enough to show that <span class="math-container">$|A \times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p> <p>To prove the result, consider the family <span class="math-container">$\mathcal{F}$</span> of those functions <span class="math-container">$f$</span> such that <span class="math-container">$f: X \times X \rightarrow X$</span>, where <span class="math-container">$X$</span> is a subset of <span class="math-container">$A$</span>. 
<span class="math-container">$\mathcal{F}$</span> is non-empty, for if <span class="math-container">$X$</span> is a countable subset of <span class="math-container">$A$</span>, then the usual dovetailing argument tells us that such an <span class="math-container">$f$</span> exist. It is also easy to see that <span class="math-container">$\mathcal{F}$</span> can be equipped with a partial order <span class="math-container">$\sqsubseteq$</span>, namely that of extension: <span class="math-container">$f_1 \sqsubseteq f_2$</span> if <span class="math-container">$\mathrm{dom}(f_1) \subseteq \mathrm{dom}(f_2)$</span> and whenever <span class="math-container">$f_1(x) = y$</span>, then also <span class="math-container">$f_2(x) = y$</span>. Because <span class="math-container">$\mathcal{F}$</span> is partially ordered, we know from Zorn's lemma that it has a maximal element <span class="math-container">$f_{max}$</span>. So let <span class="math-container">$\mathrm{dom}(f_{max}) = X$</span>. All that is left is then to show that <span class="math-container">$|X| = |A|$</span>.</p> <p>Suppose <span class="math-container">$X$</span> had strictly smaller cardinality than <span class="math-container">$A$</span>. But then, since <span class="math-container">$|A| = \max(|X|,|A \setminus X|)$</span> we must have that <span class="math-container">$|A| = |A \setminus X|$</span> and therefore that <span class="math-container">$|X| &lt; |A \setminus X|$</span>. This then means that we can find a <span class="math-container">$Y \subset A \setminus X$</span> of the same cardinality as <span class="math-container">$X$</span>. But now consider the sets <span class="math-container">$X \times Y$</span>, <span class="math-container">$Y \times X$</span> and <span class="math-container">$Y \times Y$</span>. 
Each of these sets is infinite and has the same cardinality as <span class="math-container">$X \times X$</span> and therefore also the same cardinality as <span class="math-container">$X$</span> and as <span class="math-container">$Y$</span>. Therefore we have</p> <p><span class="math-container">$$ |X \times Y \cup Y \times X \cup Y \times Y| = |Y|$$</span></p> <p>But <span class="math-container">$X \times Y \cup Y \times X \cup Y \times Y = (X \cup Y) \times (X \cup Y)$</span>, so this means that we can extend <span class="math-container">$f_{max}$</span> such that its domain is <span class="math-container">$(X \cup Y) \times (X \cup Y)$</span>, and that is a contradiction, since <span class="math-container">$f_{max}$</span> was assumed to be maximal.</p> Answer 17: <p>You can think of a real number in decimal notation: an infinite sequence of digits with a decimal point somewhere in there. Now you can take $n$ (e.g. $n=3$) such numbers and interleave their digits like this:</p> <pre><code> 1 2. 3 5 6… 1 3 7 . 2 7 3 … 9 3 .1 5 8 … ⇒ 10931372.123575836… </code></pre> <p>So this way you have turned a triple of real numbers into a single real number in a reversible fashion. To make this rigorous you'd have to consider special cases like $0.999… = 1$, but the above should convey the general idea in an intuitive way.</p> <p>For $n=\infty$ this argument no longer holds in this fashion. And it's not that trivial there, either, since it depends on the <em>kind</em> of infinity. Most likely you have contable infinity in mind there, I guess, so you are asking about $\mathfrak c^{\aleph_0}$.</p> Answer 18: <p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class="math-container">$\mathbb{R}^n$</span> can be at most that of <span class="math-container">$\mathbb{R}$</span>. A space-filling curve in a curve parametrized by a surjective mapping <span class="math-container">$s: \mathbb{R} \rightarrow \mathbb{R}^n$</span>. 
Since <span class="math-container">$s$</span> is surjective, we must conclude that <span class="math-container">$|\mathbb{R}^n| \leq |\mathbb{R}|$</span>.</p> <p>More generally, to show that we have <span class="math-container">$|A^n| = |A|$</span> for any infinite set <span class="math-container">$A$</span> and finite <span class="math-container">$n \geq 1$</span>, it is enough to show that <span class="math-container">$|A \times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p> <p>To prove the result, consider the family <span class="math-container">$\mathcal{F}$</span> of those functions <span class="math-container">$f$</span> such that <span class="math-container">$f: X \times X \rightarrow X$</span>, where <span class="math-container">$X$</span> is a subset of <span class="math-container">$A$</span>. <span class="math-container">$\mathcal{F}$</span> is non-empty, for if <span class="math-container">$X$</span> is a countable subset of <span class="math-container">$A$</span>, then the usual dovetailing argument tells us that such an <span class="math-container">$f$</span> exist. It is also easy to see that <span class="math-container">$\mathcal{F}$</span> can be equipped with a partial order <span class="math-container">$\sqsubseteq$</span>, namely that of extension: <span class="math-container">$f_1 \sqsubseteq f_2$</span> if <span class="math-container">$\mathrm{dom}(f_1) \subseteq \mathrm{dom}(f_2)$</span> and whenever <span class="math-container">$f_1(x) = y$</span>, then also <span class="math-container">$f_2(x) = y$</span>. Because <span class="math-container">$\mathcal{F}$</span> is partially ordered, we know from Zorn's lemma that it has a maximal element <span class="math-container">$f_{max}$</span>. So let <span class="math-container">$\mathrm{dom}(f_{max}) = X$</span>. 
All that is left is then to show that <span class="math-container">$|X| = |A|$</span>.</p> <p>Suppose <span class="math-container">$X$</span> had strictly smaller cardinality than <span class="math-container">$A$</span>. But then, since <span class="math-container">$|A| = \max(|X|,|A \setminus X|)$</span> we must have that <span class="math-container">$|A| = |A \setminus X|$</span> and therefore that <span class="math-container">$|X| &lt; |A \setminus X|$</span>. This then means that we can find a <span class="math-container">$Y \subset A \setminus X$</span> of the same cardinality as <span class="math-container">$X$</span>. But now consider the sets <span class="math-container">$X \times Y$</span>, <span class="math-container">$Y \times X$</span> and <span class="math-container">$Y \times Y$</span>. Each of these sets is infinite and has the same cardinality as <span class="math-container">$X \times X$</span> and therefore also the same cardinality as <span class="math-container">$X$</span> and as <span class="math-container">$Y$</span>. Therefore we have</p> <p><span class="math-container">$$ |X \times Y \cup Y \times X \cup Y \times Y| = |Y|$$</span></p> <p>But <span class="math-container">$X \times Y \cup Y \times X \cup Y \times Y = (X \cup Y) \times (X \cup Y)$</span>, so this means that we can extend <span class="math-container">$f_{max}$</span> such that its domain is <span class="math-container">$(X \cup Y) \times (X \cup Y)$</span>, and that is a contradiction, since <span class="math-container">$f_{max}$</span> was assumed to be maximal.</p> Answer 19: <p>You can think of a real number in decimal notation: an infinite sequence of digits with a decimal point somewhere in there. Now you can take $n$ (e.g. $n=3$) such numbers and interleave their digits like this:</p> <pre><code> 1 2. 3 5 6… 1 3 7 . 
2 7 3 …
 9 3 .1 5 8 …
⇒ 10931372.123575836… </code></pre> <p>So this way you have turned a triple of real numbers into a single real number in a reversible fashion. To make this rigorous you'd have to consider special cases like $0.999… = 1$, but the above should convey the general idea in an intuitive way.</p> <p>For $n=\infty$ this argument no longer holds in this fashion. And it's not that trivial there, either, since it depends on the <em>kind</em> of infinity. Most likely you have countable infinity in mind there, I guess, so you are asking about $\mathfrak c^{\aleph_0}$.</p> Answer 20: <p>The existence of <em>space-filling curves</em> shows that the cardinality of <span class="math-container">$\mathbb{R}^n$</span> can be at most that of <span class="math-container">$\mathbb{R}$</span>. A space-filling curve is a curve parametrized by a surjective mapping <span class="math-container">$s: \mathbb{R} \rightarrow \mathbb{R}^n$</span>. Since <span class="math-container">$s$</span> is surjective, we must conclude that <span class="math-container">$|\mathbb{R}^n| \leq |\mathbb{R}|$</span>.</p> <p>More generally, to show that we have <span class="math-container">$|A^n| = |A|$</span> for any infinite set <span class="math-container">$A$</span> and finite <span class="math-container">$n \geq 1$</span>, it is enough to show that <span class="math-container">$|A \times A| = |A|$</span>, since the general case then follows by a trivial inductive argument.</p> <p>To prove the result, consider the family <span class="math-container">$\mathcal{F}$</span> of those functions <span class="math-container">$f$</span> such that <span class="math-container">$f: X \times X \rightarrow X$</span>, where <span class="math-container">$X$</span> is a subset of <span class="math-container">$A$</span>.
<span class="math-container">$\mathcal{F}$</span> is non-empty, for if <span class="math-container">$X$</span> is a countable subset of <span class="math-container">$A$</span>, then the usual dovetailing argument tells us that such an <span class="math-container">$f$</span> exists. It is also easy to see that <span class="math-container">$\mathcal{F}$</span> can be equipped with a partial order <span class="math-container">$\sqsubseteq$</span>, namely that of extension: <span class="math-container">$f_1 \sqsubseteq f_2$</span> if <span class="math-container">$\mathrm{dom}(f_1) \subseteq \mathrm{dom}(f_2)$</span> and whenever <span class="math-container">$f_1(x) = y$</span>, then also <span class="math-container">$f_2(x) = y$</span>. Because every chain in <span class="math-container">$\mathcal{F}$</span> has an upper bound (the union of a chain of such functions is again in <span class="math-container">$\mathcal{F}$</span>), Zorn's lemma tells us that it has a maximal element <span class="math-container">$f_{max}$</span>. So let <span class="math-container">$\mathrm{dom}(f_{max}) = X$</span>. All that is left is then to show that <span class="math-container">$|X| = |A|$</span>.</p> <p>Suppose <span class="math-container">$X$</span> had strictly smaller cardinality than <span class="math-container">$A$</span>. But then, since <span class="math-container">$|A| = \max(|X|,|A \setminus X|)$</span> we must have that <span class="math-container">$|A| = |A \setminus X|$</span> and therefore that <span class="math-container">$|X| &lt; |A \setminus X|$</span>. This then means that we can find a <span class="math-container">$Y \subset A \setminus X$</span> of the same cardinality as <span class="math-container">$X$</span>. But now consider the sets <span class="math-container">$X \times Y$</span>, <span class="math-container">$Y \times X$</span> and <span class="math-container">$Y \times Y$</span>.
Each of these sets is infinite and has the same cardinality as <span class="math-container">$X \times X$</span> and therefore also the same cardinality as <span class="math-container">$X$</span> and as <span class="math-container">$Y$</span>. Therefore we have</p> <p><span class="math-container">$$ |X \times Y \cup Y \times X \cup Y \times Y| = |Y|$$</span></p> <p>But <span class="math-container">$X \times X \cup X \times Y \cup Y \times X \cup Y \times Y = (X \cup Y) \times (X \cup Y)$</span>, so this means that we can extend <span class="math-container">$f_{max}$</span> such that its domain is <span class="math-container">$(X \cup Y) \times (X \cup Y)$</span>, and that is a contradiction, since <span class="math-container">$f_{max}$</span> was assumed to be maximal.</p>
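The digit-interleaving construction from the decimal-notation answer above is easy to make concrete. A minimal sketch (the function names are mine; it truncates to finitely many fractional digits and ignores the $0.999… = 1$ subtlety the answer mentions):

```python
def interleave(*xs, digits=6):
    """Merge the decimal digits of several non-negative reals into one real.

    Integer parts are zero-padded to a common width so the digit columns
    line up; each column is then read top to bottom.  Truncates to `digits`
    fractional digits, so this only illustrates the idea on finite expansions.
    """
    int_parts = [str(int(x)) for x in xs]
    width = max(len(s) for s in int_parts)
    int_parts = [s.zfill(width) for s in int_parts]
    frac_parts = [f"{x:.{digits}f}".split(".")[1] for x in xs]

    merged_int = "".join("".join(col) for col in zip(*int_parts))
    merged_frac = "".join("".join(col) for col in zip(*frac_parts))
    return merged_int + "." + merged_frac


def deinterleave(s, n):
    """Recover the n interleaved digit strings: every n-th digit is one number."""
    int_s, frac_s = s.split(".")
    return [int_s[i::n] + "." + frac_s[i::n] for i in range(n)]
```

For example, `interleave(12.35, 137.27, 93.15, digits=2)` gives `"010139273.321575"`, and `deinterleave` reads every third digit to recover the three inputs — the reversibility the answer appeals to.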
Is $\sqrt{7} \in \mathbb Q(\zeta_{49})$?
2,092,348
6
https://math.stackexchange.com/questions/2092348/do-sqrt7-in-mathbb-q-zeta-49
<p>The problem I have is the following: <br><br>Show that $\mathbb Q(\zeta_{49},\sqrt{7})=K$ is a Galois extension and determine its Galois group. <br><br>I know that it is a Galois extension because it is the splitting field of $f(x)=(x^2-7)(\phi_{49}(x))$, where $\phi_{49}(x)$ is the minimal polynomial of $\zeta_{49}$ (a primitive $49$th root of unity), but I have difficulty deciding whether $\sqrt{7} \in \mathbb Q(\zeta_{49})$.<br> I know that $Aut(\mathbb Q(\zeta_{49})/\mathbb Q)\cong \mathbb Z_{49}^*\cong \mathbb Z_{42}$ is a cyclic group, so there is a unique subextension of $K$ that has degree $2$ over $\mathbb Q$. But I have no idea how to conclude. </p>
[ { "answer_id": 2092410, "score": 9, "text": "<p>The degree $[\\mathbb{Q}(\\zeta_{49})\\cap \\mathbb{R}:\\mathbb{Q}]=21$ is odd, so...</p>\n" }, { "answer_id": 2095550, "score": 1, "text": "<p>Another approach than @WimC’s good answer: $[\\Bbb Q(\\zeta_{49}):\\Bbb Q]=42$, and it’s an extension ramified only at $7$, with cyclic Galois group. So the extension has only one intermediate field of each possible degree, every one of them ramified only at $7$. The quadratic extension ramified only at $7$ is $\\Bbb Q(\\sqrt{-7}\\,)$.</p>\n" }, { "answer_id": 2108742, "score": 1, "text": "<p>The quadratic Gauss sum\n$$S=\\sum_{a\\in\\mathbb F_p}\\zeta_p^{a^2}$$\nfor odd prime $p$ satisfies\n$$S^2=(-1)^{(p-1)/2}p$$\nas shown by direct calculation. It follows for $p=7$ that $$S=\\pm\\,i\\sqrt7\\in\\mathbb Q(\\zeta_7)\\subset\\mathbb Q(\\zeta_{49}),$$ so $\\mathbb Q(i\\sqrt7)$ and not $\\mathbb Q(\\sqrt7)$ is the unique quadratic subfield of $\\mathbb Q(\\zeta_{49})$.</p>\n" } ]
Question: Is $\sqrt{7} \in \mathbb Q(\zeta_{49})$? Answer 1: <p>The degree $[\mathbb{Q}(\zeta_{49})\cap \mathbb{R}:\mathbb{Q}]=21$ is odd, so...</p> Answer 2: <p>Another approach than @WimC’s good answer: $[\Bbb Q(\zeta_{49}):\Bbb Q]=42$, and it’s an extension ramified only at $7$, with cyclic Galois group. So the extension has only one intermediate field of each possible degree, every one of them ramified only at $7$. The quadratic extension ramified only at $7$ is $\Bbb Q(\sqrt{-7}\,)$.</p> Answer 3: <p>The quadratic Gauss sum $$S=\sum_{a\in\mathbb F_p}\zeta_p^{a^2}$$ for odd prime $p$ satisfies $$S^2=(-1)^{(p-1)/2}p$$ as shown by direct calculation. It follows for $p=7$ that $$S=\pm\,i\sqrt7\in\mathbb Q(\zeta_7)\subset\mathbb Q(\zeta_{49}),$$ so $\mathbb Q(i\sqrt7)$ and not $\mathbb Q(\sqrt7)$ is the unique quadratic subfield of $\mathbb Q(\zeta_{49})$.</p>
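The Gauss-sum identity used in the last answer is easy to sanity-check numerically. This only verifies $S^2=-7$ in floating point for $p=7$; it is not a substitute for the direct calculation the answer cites:

```python
import cmath

p = 7
zeta = cmath.exp(2j * cmath.pi / p)          # a primitive p-th root of unity
S = sum(zeta ** (a * a) for a in range(p))   # quadratic Gauss sum

# (-1)^((p-1)/2) * p = -7 for p = 7, so S^2 = -7 and S = ±i√7.
assert abs(S * S + p) < 1e-9
assert abs(S.real) < 1e-9                    # S is purely imaginary
assert abs(abs(S.imag) - p ** 0.5) < 1e-9    # |S| = √7
```

In particular $S\in\mathbb Q(\zeta_7)$ is a square root of $-7$, which is why $\mathbb Q(\sqrt{-7})$, and not $\mathbb Q(\sqrt7)$, sits inside $\mathbb Q(\zeta_{49})$.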
Show that if $f$ increases nicely and $x &gt; 1$ then $\lim_{n \to \infty}\frac{f(n)}{x^n}\sum_{k=1}^n \frac{x^k}{f(k)} = \frac{x}{x-1}$
2,092,574
6
https://math.stackexchange.com/questions/2092574/show-that-if-f-increases-nicely-and-x-1-then-lim-n-to-infty-fracfn
<p>More precisely, suppose $f(k)$ is increasing, differentiable, $f(1) \ge 1$, and $f'(n)/f(n) \le d/n$ for some $d&gt;0$.</p> <p>Then, for any $x &gt; 1$, as $n \to \infty$, $$\dfrac{f(n)}{x^n}\sum_{k=1}^n \dfrac{x^k}{f(k)} \to \dfrac{x}{x-1}. $$</p> <p>This is the most reasonable generalization I could find of my answer to <a href="https://math.stackexchange.com/questions/2092326/computing-lim-n-fracn2n-sum-k-1n-frac2kk/2092407#2092407">Computing $\lim_n \frac{n}{2^n}\sum_{k=1}^n \frac{2^k}{k}$</a>.</p> <p>That question is the case $x=2, f(n) = n$.</p> <p>I have what I think is a proof, which uses the same technique as my answer referenced above. But it is a little messy, and my hope is that someone can come up with a nicer proof.</p> <p>I will post my proof in a few days if no answer is given.</p>
[ { "answer_id": 2092705, "score": 1, "text": "<p>From the conditions $f(n)\\le f(n+1)$ and $\\frac{f'(n)}{f(n)}\\le d/n$ we have </p>\n\n<p>$$1\\le \\frac{f(n+1)}{f(n)}\\le \\left(1+\\frac1n\\right)^d \\tag 1$$ </p>\n\n<blockquote>\n <p>Note that the right-hand side of the inequality in $(1)$ can be obtained by integrating the inequality $\\int_n^{n+1}\\left(\\log(f(x)\\right)'\\,dx\\le \\int_n^{n+1}d\\frac{1}{x}\\,dx$.</p>\n</blockquote>\n\n<p>Application of the squeeze theorem reveals</p>\n\n<p>$$\\lim_{n\\to \\infty}\\frac{f(n+1)}{f(n)}=1 \\tag 2$$</p>\n\n<hr>\n\n<p>Alongside $(1)$, it is straightforward to show from the condition $f'(n)/f(n)\\le d/n$ that </p>\n\n<p>$$\\lim_{n\\to \\infty}\\frac{x^n}{f(n)}=\\infty \\tag 3$$</p>\n\n<blockquote>\n <p>Note that $\\int_1^n \\left(\\log(f(x)\\right)'\\,dx\\le \\int_1^{n}d\\frac{1}{x}\\,dx$ yields $f(n)\\le f(1)\\,n^d$ and $\\frac{x^n}{f(n)}\\ge \\frac{x^n}{f(1)\\,n^d}\\to \\infty$.</p>\n</blockquote>\n\n<hr>\n\n<p>Using $(2)$ we find </p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\left(\\frac{\\sum_{k=1}^{n+1}\\frac{x^k}{f(k)}-\\sum_{k=1}^{n}\\frac{x^k}{f(k)}}{\\frac{x^{n+1}}{f(n+1)}-\\frac{x^n}{f(n)}}\\right)&amp;=\\lim_{n\\to \\infty}\\left(\\frac{\\frac{x^{n+1}}{f(n+1)}}{\\frac{x^{n+1}}{f(n+1)}-\\frac{x^{n}}{f(n)}}\\right)\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}\\left(\\frac{x}{x-\\frac{f(n+1)}{f(n)}}\\right)\\\\\\\\\n&amp;=\\frac{x}{x-1}\\tag 4\n\\end{align}$$</p>\n\n<hr>\n\n<p>Finally, equipped with $(2)$, the Stolz-Cesaro Theorem is applicable from which we obtain the coveted limit</p>\n\n<blockquote>\n <p>$$\\lim_{n\\to \\infty}\\frac{\\sum_{k=1}^n\\frac{x^k}{f(k)}}{\\frac{x^n}{f(n)}}=\\lim_{n\\to \\infty}\\left(\\frac{\\sum_{k=1}^{n+1}\\frac{x^k}{f(k)}-\\sum_{k=1}^{n}\\frac{x^k}{f(k)}}{\\frac{x^{n+1}}{f(n+1)}-\\frac{x^n}{f(n)}}\\right)=\\frac{x}{x-1}$$</p>\n</blockquote>\n\n<p>as conjectured!</p>\n" }, { "answer_id": 2092701, "score": 0, "text": "<p>As in zhw's answer in the linked question we can also 
here apply <a href=\"https://en.wikipedia.org/wiki/Stolz%E2%80%93Ces%C3%A0ro_theorem\" rel=\"nofollow noreferrer\">Stolz–Cesàro</a> to get</p>\n\n<p>$$\\lim_{n\\to\\infty} \\frac{f(n)}{x^n}\\sum_{k=1}^n\\frac{x^k}{f(k)} = \\lim_{n\\to\\infty} \\frac{x}{x - \\frac{f(n+1)}{f(n)}} = \\frac{x}{x-1}$$</p>\n\n<p>since $\\lim_{n\\to\\infty}\\frac{f(n+1)}{f(n)} = 1$. For this final limit we can for example apply <a href=\"https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality\" rel=\"nofollow noreferrer\">Grönwall's inequality</a> to $f'(x) \\leq \\frac{d}{x}f(x)$ giving us $1\\leq \\frac{f(n+1)}{f(n)} \\leq \\left(\\frac{n+1}{n}\\right)^d$ and the limit follows by the squeeze theorem. This can also be deduced by the mean value theorem applied to $g(x) = \\log f(x)$ which gives $1 \\leq \\frac{f(n+1)}{f(n)} \\leq e^{\\frac{d}{n}}$.</p>\n" } ]
Question: Show that if $f$ increases nicely and $x &gt; 1$ then $\lim_{n \to \infty}\frac{f(n)}{x^n}\sum_{k=1}^n \frac{x^k}{f(k)} = \frac{x}{x-1}$ Answer 1: <p>From the conditions $f(n)\le f(n+1)$ and $\frac{f'(n)}{f(n)}\le d/n$ we have </p> <p>$$1\le \frac{f(n+1)}{f(n)}\le \left(1+\frac1n\right)^d \tag 1$$ </p> <blockquote> <p>Note that the right-hand side of the inequality in $(1)$ can be obtained by integrating the inequality $\int_n^{n+1}\left(\log(f(x))\right)'\,dx\le \int_n^{n+1}d\frac{1}{x}\,dx$.</p> </blockquote> <p>Application of the squeeze theorem reveals</p> <p>$$\lim_{n\to \infty}\frac{f(n+1)}{f(n)}=1 \tag 2$$</p> <hr> <p>Alongside $(1)$, it is straightforward to show from the condition $f'(n)/f(n)\le d/n$ that </p> <p>$$\lim_{n\to \infty}\frac{x^n}{f(n)}=\infty \tag 3$$</p> <blockquote> <p>Note that $\int_1^n \left(\log(f(x))\right)'\,dx\le \int_1^{n}d\frac{1}{x}\,dx$ yields $f(n)\le f(1)\,n^d$ and $\frac{x^n}{f(n)}\ge \frac{x^n}{f(1)\,n^d}\to \infty$.</p> </blockquote> <hr> <p>Using $(2)$ we find </p> <p>$$\begin{align} \lim_{n\to \infty}\left(\frac{\sum_{k=1}^{n+1}\frac{x^k}{f(k)}-\sum_{k=1}^{n}\frac{x^k}{f(k)}}{\frac{x^{n+1}}{f(n+1)}-\frac{x^n}{f(n)}}\right)&amp;=\lim_{n\to \infty}\left(\frac{\frac{x^{n+1}}{f(n+1)}}{\frac{x^{n+1}}{f(n+1)}-\frac{x^{n}}{f(n)}}\right)\\\\ &amp;=\lim_{n\to \infty}\left(\frac{x}{x-\frac{f(n+1)}{f(n)}}\right)\\\\ &amp;=\frac{x}{x-1}\tag 4 \end{align}$$</p> <hr> <p>Finally, equipped with $(2)$, the Stolz-Cesaro Theorem is applicable from which we obtain the coveted limit</p> <blockquote> <p>$$\lim_{n\to \infty}\frac{\sum_{k=1}^n\frac{x^k}{f(k)}}{\frac{x^n}{f(n)}}=\lim_{n\to \infty}\left(\frac{\sum_{k=1}^{n+1}\frac{x^k}{f(k)}-\sum_{k=1}^{n}\frac{x^k}{f(k)}}{\frac{x^{n+1}}{f(n+1)}-\frac{x^n}{f(n)}}\right)=\frac{x}{x-1}$$</p> </blockquote> <p>as conjectured!</p> Answer 2: <p>As in zhw's answer in the linked question we can also here apply <a href="https://en.wikipedia.org/wiki/Stolz%E2%80%93Ces%C3%A0ro_theorem" rel="nofollow
noreferrer">Stolz–Cesàro</a> to get</p> <p>$$\lim_{n\to\infty} \frac{f(n)}{x^n}\sum_{k=1}^n\frac{x^k}{f(k)} = \lim_{n\to\infty} \frac{x}{x - \frac{f(n+1)}{f(n)}} = \frac{x}{x-1}$$</p> <p>since $\lim_{n\to\infty}\frac{f(n+1)}{f(n)} = 1$. For this final limit we can for example apply <a href="https://en.wikipedia.org/wiki/Gr%C3%B6nwall&#39;s_inequality" rel="nofollow noreferrer">Grönwall's inequality</a> to $f'(x) \leq \frac{d}{x}f(x)$ giving us $1\leq \frac{f(n+1)}{f(n)} \leq \left(\frac{n+1}{n}\right)^d$ and the limit follows by the squeeze theorem. This can also be deduced by the mean value theorem applied to $g(x) = \log f(x)$ which gives $1 \leq \frac{f(n+1)}{f(n)} \leq e^{\frac{d}{n}}$.</p>
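Both answers derive the limit $\frac{x}{x-1}$. As a quick numerical sanity check (an editor's sketch, not part of either answer), the scaled partial sums can be evaluated for a concrete choice satisfying the hypotheses, say $f(n) = n^2$ (so $f'(n)/f(n) = 2/n \le d/n$ with $d = 2$) and $x = 2$, where the predicted limit is $2/(2-1) = 2$:

```python
from fractions import Fraction

def scaled_sum(n, x=Fraction(2), f=lambda k: k * k):
    """f(n)/x^n * sum_{k=1}^n x^k/f(k), computed exactly with rationals
    to avoid float overflow in x**n for large n."""
    s = sum(x**k / Fraction(f(k)) for k in range(1, n + 1))
    return Fraction(f(n)) / x**n * s

# The values should drift toward x/(x-1) = 2 as n grows
# (the error is roughly of order d/n, so convergence is slow).
for n in (10, 100, 1000):
    print(n, float(scaled_sum(n)))
```

The exact-rational computation is deliberate: `2**1000` overflows a float, while `Fraction` arithmetic stays exact and only the final ratio (which is close to 2) is converted for display.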
Compute ${11 \choose 1} + {11 \choose 3} + ... + {11 \choose 11}$?
2,092,649
6
https://math.stackexchange.com/questions/2092649/compute-11-choose-1-11-choose-3-11-choose-11
<p>I did this with brute force, and got 1024.</p> <p>What is a faster method of solving this?</p>
[ { "answer_id": 2092653, "score": 12, "text": "<p>Since any time you choose an odd number of elements out of a set of eleven, you leave behind an even number of elements, \n$$\n\\sum_{k\\text{ odd}} \\binom{11}{k} = \\sum_{k\\text{ even}} \\binom{11}{k} \n$$</p>\n\n<p>So the number of ways of choosing any subset of eleven elements is twice the number of ways of choosing an odd number of elements.</p>\n\n<p>But to choose any subset, each element is either in or out of the chosen subset: That gives $2^{11}$.</p>\n\n<p>So the number of ways to choose an odd number of elements is \n$$\\frac12\\cdot 2^{11} = 2^{10} = 1024$$</p>\n\n<p>P.S. This reasoning does not work when the main set has an even number of elements. But the property of the number of ways to choose an odd number of elements being half the total number of subsets still holds. Can you see why?</p>\n" }, { "answer_id": 2092654, "score": 12, "text": "<p>I don't know if this is the fastest method, but you can use the binomial theorem twice: $$2^{11}=(1+1)^{11}=\\sum_{i=0}^{11}\\binom{11}{i}$$ and $$0=(1-1)^{11}=\\sum_{i=0}^{11}\\binom{11}{i}(-1)^i.$$ Upon subtracting these two results and dividing by two, we obtain the result.</p>\n" }, { "answer_id": 2093257, "score": 5, "text": "<p>Since $\\binom{11}{1}=\\binom{11}{10},\\binom{11}{3}=\\binom{11}{8}$ and so on,\n$$ \\binom{11}{1}+\\binom{11}{3}+\\ldots+\\binom{11}{11} = \\binom{11}{10}+\\binom{11}{8}+\\ldots+\\binom{11}{0} $$\nso both the terms are half the sum $\\binom{11}{0}+\\binom{11}{1}+\\binom{11}{2}+\\ldots+\\binom{11}{11}=2^{11}$, i.e. $2^{10}=\\color{red}{1024}$.</p>\n" }, { "answer_id": 2093283, "score": 2, "text": "<p>Usually you break up ${n+1\\choose k}, k\\le n$ into ${n\\choose k-1}+{n\\choose k}$. This is one of the first tricks you learn from probably every baby probability textbook.
For your problem:</p>\n\n<p>$${11\\choose 1}+{11\\choose 3}+...+{11\\choose 9}+{11\\choose 11}=\\left({10\\choose 0}+{10\\choose 1}\\right)+\\left({10\\choose 2}+{10\\choose 3}\\right)+...+\\left({10\\choose 8}+{10\\choose 9}\\right)+{10\\choose 10}=(1+1)^{10}$$</p>\n\n<p>by recognizing ${11\\choose 11}={10\\choose 10}$.</p>\n" }, { "answer_id": 2092679, "score": 0, "text": "<p>You might recognize ${m \\choose n}$ as the coefficient of the $x^n$ term in the binomial expansion of $(x+1)^m$.</p>\n\n<p>So the sum of all of the coefficients equals</p>\n\n<p>$(1+1)^m$</p>\n\n<p>In your example above you only have the odd coefficients.</p>\n\n<p>${11 \\choose n} ={11 \\choose 11-n}$</p>\n\n<p>For every odd coefficient you have a symmetric even counterpart.</p>\n\n<p>$\\sum\\limits_{i=0}^{5} {11\\choose 2i+1} = \\frac 12 \\sum\\limits_{i=0}^{11} {11\\choose i} = 2^{10}$</p>\n" }, { "answer_id": 2105272, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 0}^{5}{11 \\choose 2k + 1} &amp; =\n\\sum_{k = 0}^{10}{11 \\choose k + 1}{1 + \\pars{-1}^{k} \\over 2} =\n\\sum_{k = 1}^{11}{11 \\choose k}{1 - \\pars{-1}^{k} \\over 2}\n\\\\[5mm] &amp; =\n{1 \\over 2}\\underbrace{\\sum_{k = 0}^{11}{11
\\choose k}}_{\\ds{2^{11} = 2048}}\\ -\\ {1 \\over 2}\\underbrace{\\sum_{k = 0}^{11}{11 \\choose k}\\pars{-1}^{k}}_{\\ds{=\\ 0}}\\ = \\bbx{\\ds{1024}} \n\\end{align}</p>\n" }, { "answer_id": 2092653, "score": 12, "text": "<p>Since any time you choose an odd number of elements out of a set of eleven, you leave behind an even number of elements, \n$$\n\\sum_{k\\text{ odd}} \\binom{11}{k} = \\sum_{k\\text{ even}} \\binom{11}{k} \n$$</p>\n\n<p>So the number of ways of choosing any subset of eleven elements is twice the number of ways of choosing an odd number of elements.</p>\n\n<p>But to choose any subset, each element is either in our out of the chosen subset: Than gives $2^{11}$.</p>\n\n<p>So the number of ways to choose an odd number of elements is \n$$\\frac12\\cdot 2^{11} = 2^{10} = 1024$$</p>\n\n<p>P.S. This reasoning does not work when the main set has an even number of elements. But the property of the number of ways to choose an odd number of elements being half the total number of subsets still holds. Can you see why?</p>\n" }, { "answer_id": 2092654, "score": 12, "text": "<p>I don't know if this is the fastest method, but you can use the binomial theorem twice: $$2^{11}=(1+1)^{11}=\\sum_{i=0}^{11}\\binom{11}{i}$$ and $$0=(1-1)^{11}=\\sum_{i=0}^{11}\\binom{11}{i}(-1)^i.$$ Upon subtracting these two results and dividing by two, we obtain the result.</p>\n" }, { "answer_id": 2093257, "score": 5, "text": "<p>Since $\\binom{11}{1}=\\binom{11}{10},\\binom{11}{3}=\\binom{11}{8}$ and so on,\n$$ \\binom{11}{1}+\\binom{11}{3}+\\ldots+\\binom{11}{11} = \\binom{11}{10}+\\binom{11}{8}+\\ldots+\\binom{11}{0} $$\nso both the terms are half the sum $\\binom{11}{0}+\\binom{11}{1}+\\binom{11}{2}+\\ldots+\\binom{11}{11}=2^{11}$, i.e. $2^{10}=\\color{red}{1024}$.</p>\n" }, { "answer_id": 2093283, "score": 2, "text": "<p>Usually you break up ${n+1\\choose k}, k\\le n$ into ${n\\choose k-1}+{n\\choose k}$. 
This is one of the first tricks you learn from probably every baby probability textbook. For your problem:</p>\n\n<p>$${11\\choose 1}+{11\\choose 3}+...+{11\\choose 9}+{11\\choose 11}=\\left({10\\choose 0}+{10\\choose 1}\\right)+\\left({10\\choose 2}+{10\\choose 3}\\right)+...+\\left({10\\choose 8}+{10\\choose 9}\\right)+{10\\choose 10}=(1+1)^{10}$$</p>\n\n<p>by recognizing ${11\\choose 11}={10\\choose 10}$.</p>\n" }, { "answer_id": 2092679, "score": 0, "text": "<p>You might recoginize ${m \\choose n}$ as the co-efficient of the $x^n$ term in the binomial expansion of $(x+1)^m$</p>\n\n<p>So the sum of all of the coefficients equals</p>\n\n<p>$(1+1)^m$</p>\n\n<p>In your example above only have the odd coefficients.</p>\n\n<p>${11 \\choose n} ={11 \\choose 11-n}$</p>\n\n<p>For every odd coefficient you have a symmetric even counterpart.</p>\n\n<p>$\\sum_\\limits {i=0}^{5} {11\\choose 2i+1} = \\frac 12 \\sum_\\limits {i=0}^{11} {11\\choose i} = 2^{10}$</p>\n" }, { "answer_id": 2105272, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n \\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 0}^{5}{11 \\choose 2k + 1} &amp; =\n\\sum_{k = 0}^{10}{11 \\choose k + 1}{1 + \\pars{-1}^{k} \\over 2} =\n\\sum_{k = 1}^{11}{11 \\choose k}{1 - 
\\pars{-1}^{k} \\over 2}\n\\\\[5mm] &amp; =\n{1 \\over 2}\\underbrace{\\sum_{k = 0}^{11}{11 \\choose k}}_{\\ds{2^{11} = 2048}}\\ -\\ {1 \\over 2}\\underbrace{\\sum_{k = 0}^{11}{11 \\choose k}\\pars{-1}^{k}}_{\\ds{=\\ 0}}\\ = \\bbx{\\ds{1024}} \n\\end{align}</p>\n" }, { "answer_id": 2092653, "score": 12, "text": "<p>Since any time you choose an odd number of elements out of a set of eleven, you leave behind an even number of elements, \n$$\n\\sum_{k\\text{ odd}} \\binom{11}{k} = \\sum_{k\\text{ even}} \\binom{11}{k} \n$$</p>\n\n<p>So the number of ways of choosing any subset of eleven elements is twice the number of ways of choosing an odd number of elements.</p>\n\n<p>But to choose any subset, each element is either in or out of the chosen subset: that gives $2^{11}$.</p>\n\n<p>So the number of ways to choose an odd number of elements is \n$$\\frac12\\cdot 2^{11} = 2^{10} = 1024$$</p>\n\n<p>P.S. This reasoning does not work when the main set has an even number of elements. But the property of the number of ways to choose an odd number of elements being half the total number of subsets still holds. Can you see why?</p>\n" }, { "answer_id": 2092654, "score": 12, "text": "<p>I don't know if this is the fastest method, but you can use the binomial theorem twice: $$2^{11}=(1+1)^{11}=\\sum_{i=0}^{11}\\binom{11}{i}$$ and $$0=(1-1)^{11}=\\sum_{i=0}^{11}\\binom{11}{i}(-1)^i.$$ Upon subtracting these two results and dividing by two, we obtain the result.</p>\n" }, { "answer_id": 2093257, "score": 5, "text": "<p>Since $\\binom{11}{1}=\\binom{11}{10},\\binom{11}{3}=\\binom{11}{8}$ and so on,\n$$ \\binom{11}{1}+\\binom{11}{3}+\\ldots+\\binom{11}{11} = \\binom{11}{10}+\\binom{11}{8}+\\ldots+\\binom{11}{0} $$\nso both the terms are half the sum $\\binom{11}{0}+\\binom{11}{1}+\\binom{11}{2}+\\ldots+\\binom{11}{11}=2^{11}$, i.e. 
$2^{10}=\\color{red}{1024}$.</p>\n" }, { "answer_id": 2093283, "score": 2, "text": "<p>Usually you break up ${n+1\\choose k}, k\\le n$ into ${n\\choose k-1}+{n\\choose k}$. This is one of the first tricks you learn from probably every baby probability textbook. For your problem:</p>\n\n<p>$${11\\choose 1}+{11\\choose 3}+...+{11\\choose 9}+{11\\choose 11}=\\left({10\\choose 0}+{10\\choose 1}\\right)+\\left({10\\choose 2}+{10\\choose 3}\\right)+...+\\left({10\\choose 8}+{10\\choose 9}\\right)+{10\\choose 10}=(1+1)^{10}$$</p>\n\n<p>by recognizing ${11\\choose 11}={10\\choose 10}$.</p>\n" }, { "answer_id": 2092679, "score": 0, "text": "<p>You might recoginize ${m \\choose n}$ as the co-efficient of the $x^n$ term in the binomial expansion of $(x+1)^m$</p>\n\n<p>So the sum of all of the coefficients equals</p>\n\n<p>$(1+1)^m$</p>\n\n<p>In your example above only have the odd coefficients.</p>\n\n<p>${11 \\choose n} ={11 \\choose 11-n}$</p>\n\n<p>For every odd coefficient you have a symmetric even counterpart.</p>\n\n<p>$\\sum_\\limits {i=0}^{5} {11\\choose 2i+1} = \\frac 12 \\sum_\\limits {i=0}^{11} {11\\choose i} = 2^{10}$</p>\n" }, { "answer_id": 2105272, "score": 0, "text": "<p>$\\newcommand{\\bbx}[1]{\\,\\bbox[15px,border:1px groove navy]{\\displaystyle{#1}}\\,}\n \\newcommand{\\braces}[1]{\\left\\lbrace\\,{#1}\\,\\right\\rbrace}\n \\newcommand{\\bracks}[1]{\\left\\lbrack\\,{#1}\\,\\right\\rbrack}\n \\newcommand{\\dd}{\\mathrm{d}}\n \\newcommand{\\ds}[1]{\\displaystyle{#1}}\n \\newcommand{\\expo}[1]{\\,\\mathrm{e}^{#1}\\,}\n \\newcommand{\\ic}{\\mathrm{i}}\n \\newcommand{\\mc}[1]{\\mathcal{#1}}\n \\newcommand{\\mrm}[1]{\\mathrm{#1}}\n \\newcommand{\\pars}[1]{\\left(\\,{#1}\\,\\right)}\n \\newcommand{\\partiald}[3][]{\\frac{\\partial^{#1} #2}{\\partial #3^{#1}}}\n \\newcommand{\\root}[2][]{\\,\\sqrt[#1]{\\,{#2}\\,}\\,}\n \\newcommand{\\totald}[3][]{\\frac{\\mathrm{d}^{#1} #2}{\\mathrm{d} #3^{#1}}}\n 
\\newcommand{\\verts}[1]{\\left\\vert\\,{#1}\\,\\right\\vert}$\n\\begin{align}\n\\sum_{k = 0}^{5}{11 \\choose 2k + 1} &amp; =\n\\sum_{k = 0}^{10}{11 \\choose k + 1}{1 + \\pars{-1}^{k} \\over 2} =\n\\sum_{k = 1}^{11}{11 \\choose k}{1 - \\pars{-1}^{k} \\over 2}\n\\\\[5mm] &amp; =\n{1 \\over 2}\\underbrace{\\sum_{k = 0}^{11}{11 \\choose k}}_{\\ds{2^{11} = 2048}}\\ -\\ {1 \\over 2}\\underbrace{\\sum_{k = 0}^{11}{11 \\choose k}\\pars{-1}^{k}}_{\\ds{=\\ 0}}\\ = \\bbx{\\ds{1024}} \n\\end{align}</p>\n" } ]
Question: Compute ${11 \choose 1} + {11 \choose 3} + ... + {11 \choose 11}$? Answer 1: <p>Since any time you choose an odd number of elements out of a set of eleven, you leave behind an even number of elements, $$ \sum_{k\text{ odd}} \binom{11}{k} = \sum_{k\text{ even}} \binom{11}{k} $$</p> <p>So the number of ways of choosing any subset of the eleven elements is twice the number of ways of choosing an odd number of elements.</p> <p>But to choose any subset, each element is either in or out of the chosen subset: that gives $2^{11}$.</p> <p>So the number of ways to choose an odd number of elements is $$\frac12\cdot 2^{11} = 2^{10} = 1024$$</p> <p>P.S. This reasoning does not work when the main set has an even number of elements. But the property of the number of ways to choose an odd number of elements being half the total number of subsets still holds. Can you see why?</p> Answer 2: <p>I don't know if this is the fastest method, but you can use the binomial theorem twice: $$2^{11}=(1+1)^{11}=\sum_{i=0}^{11}\binom{11}{i}$$ and $$0=(1-1)^{11}=\sum_{i=0}^{11}\binom{11}{i}(-1)^i.$$ Upon subtracting these two results and dividing by two, we obtain the result.</p> Answer 3: <p>Since $\binom{11}{1}=\binom{11}{10},\binom{11}{3}=\binom{11}{8}$ and so on, $$ \binom{11}{1}+\binom{11}{3}+\ldots+\binom{11}{11} = \binom{11}{10}+\binom{11}{8}+\ldots+\binom{11}{0} $$ so both terms are half the sum $\binom{11}{0}+\binom{11}{1}+\binom{11}{2}+\ldots+\binom{11}{11}=2^{11}$, i.e. $2^{10}=\color{red}{1024}$.</p> Answer 4: <p>Usually you break up ${n+1\choose k}, k\le n$ into ${n\choose k-1}+{n\choose k}$. This is one of the first tricks you learn from probably every baby probability textbook.
For your problem:</p> <p>$${11\choose 1}+{11\choose 3}+...+{11\choose 9}+{11\choose 11}=\left({10\choose 0}+{10\choose 1}\right)+\left({10\choose 2}+{10\choose 3}\right)+...+\left({10\choose 8}+{10\choose 9}\right)+{10\choose 10}=(1+1)^{10}$$</p> <p>by recognizing ${11\choose 11}={10\choose 10}$.</p> Answer 5: <p>You might recoginize ${m \choose n}$ as the co-efficient of the $x^n$ term in the binomial expansion of $(x+1)^m$</p> <p>So the sum of all of the coefficients equals</p> <p>$(1+1)^m$</p> <p>In your example above only have the odd coefficients.</p> <p>${11 \choose n} ={11 \choose 11-n}$</p> <p>For every odd coefficient you have a symmetric even counterpart.</p> <p>$\sum_\limits {i=0}^{5} {11\choose 2i+1} = \frac 12 \sum_\limits {i=0}^{11} {11\choose i} = 2^{10}$</p> Answer 6: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 0}^{5}{11 \choose 2k + 1} &amp; = \sum_{k = 0}^{10}{11 \choose k + 1}{1 + \pars{-1}^{k} \over 2} = \sum_{k = 1}^{11}{11 \choose k}{1 - \pars{-1}^{k} \over 2} \\[5mm] &amp; = {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}}_{\ds{2^{11} = 2048}}\ -\ {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}\pars{-1}^{k}}_{\ds{=\ 0}}\ = \bbx{\ds{1024}} \end{align}</p> Answer 7: <p>Since any time you choose an odd number of elements out of a set of eleven, you leave behind 
an even number of elements, $$ \sum_{k\text{ odd}} \binom{11}{k} = \sum_{k\text{ even}} \binom{11}{k} $$</p> <p>So the number of ways of choosing any subset of eleven elements is twice the number of ways of choosing an odd number of elements.</p> <p>But to choose any subset, each element is either in our out of the chosen subset: Than gives $2^{11}$.</p> <p>So the number of ways to choose an odd number of elements is $$\frac12\cdot 2^{11} = 2^{10} = 1024$$</p> <p>P.S. This reasoning does not work when the main set has an even number of elements. But the property of the number of ways to choose an odd number of elements being half the total number of subsets still holds. Can you see why?</p> Answer 8: <p>I don't know if this is the fastest method, but you can use the binomial theorem twice: $$2^{11}=(1+1)^{11}=\sum_{i=0}^{11}\binom{11}{i}$$ and $$0=(1-1)^{11}=\sum_{i=0}^{11}\binom{11}{i}(-1)^i.$$ Upon subtracting these two results and dividing by two, we obtain the result.</p> Answer 9: <p>Since $\binom{11}{1}=\binom{11}{10},\binom{11}{3}=\binom{11}{8}$ and so on, $$ \binom{11}{1}+\binom{11}{3}+\ldots+\binom{11}{11} = \binom{11}{10}+\binom{11}{8}+\ldots+\binom{11}{0} $$ so both the terms are half the sum $\binom{11}{0}+\binom{11}{1}+\binom{11}{2}+\ldots+\binom{11}{11}=2^{11}$, i.e. $2^{10}=\color{red}{1024}$.</p> Answer 10: <p>Usually you break up ${n+1\choose k}, k\le n$ into ${n\choose k-1}+{n\choose k}$. This is one of the first tricks you learn from probably every baby probability textbook. 
For your problem:</p> <p>$${11\choose 1}+{11\choose 3}+...+{11\choose 9}+{11\choose 11}=\left({10\choose 0}+{10\choose 1}\right)+\left({10\choose 2}+{10\choose 3}\right)+...+\left({10\choose 8}+{10\choose 9}\right)+{10\choose 10}=(1+1)^{10}$$</p> <p>by recognizing ${11\choose 11}={10\choose 10}$.</p> Answer 11: <p>You might recoginize ${m \choose n}$ as the co-efficient of the $x^n$ term in the binomial expansion of $(x+1)^m$</p> <p>So the sum of all of the coefficients equals</p> <p>$(1+1)^m$</p> <p>In your example above only have the odd coefficients.</p> <p>${11 \choose n} ={11 \choose 11-n}$</p> <p>For every odd coefficient you have a symmetric even counterpart.</p> <p>$\sum_\limits {i=0}^{5} {11\choose 2i+1} = \frac 12 \sum_\limits {i=0}^{11} {11\choose i} = 2^{10}$</p> Answer 12: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 0}^{5}{11 \choose 2k + 1} &amp; = \sum_{k = 0}^{10}{11 \choose k + 1}{1 + \pars{-1}^{k} \over 2} = \sum_{k = 1}^{11}{11 \choose k}{1 - \pars{-1}^{k} \over 2} \\[5mm] &amp; = {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}}_{\ds{2^{11} = 2048}}\ -\ {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}\pars{-1}^{k}}_{\ds{=\ 0}}\ = \bbx{\ds{1024}} \end{align}</p> Answer 13: <p>Since any time you choose an odd number of elements out of a set of eleven, you leave 
behind an even number of elements, $$ \sum_{k\text{ odd}} \binom{11}{k} = \sum_{k\text{ even}} \binom{11}{k} $$</p> <p>So the number of ways of choosing any subset of eleven elements is twice the number of ways of choosing an odd number of elements.</p> <p>But to choose any subset, each element is either in our out of the chosen subset: Than gives $2^{11}$.</p> <p>So the number of ways to choose an odd number of elements is $$\frac12\cdot 2^{11} = 2^{10} = 1024$$</p> <p>P.S. This reasoning does not work when the main set has an even number of elements. But the property of the number of ways to choose an odd number of elements being half the total number of subsets still holds. Can you see why?</p> Answer 14: <p>I don't know if this is the fastest method, but you can use the binomial theorem twice: $$2^{11}=(1+1)^{11}=\sum_{i=0}^{11}\binom{11}{i}$$ and $$0=(1-1)^{11}=\sum_{i=0}^{11}\binom{11}{i}(-1)^i.$$ Upon subtracting these two results and dividing by two, we obtain the result.</p> Answer 15: <p>Since $\binom{11}{1}=\binom{11}{10},\binom{11}{3}=\binom{11}{8}$ and so on, $$ \binom{11}{1}+\binom{11}{3}+\ldots+\binom{11}{11} = \binom{11}{10}+\binom{11}{8}+\ldots+\binom{11}{0} $$ so both the terms are half the sum $\binom{11}{0}+\binom{11}{1}+\binom{11}{2}+\ldots+\binom{11}{11}=2^{11}$, i.e. $2^{10}=\color{red}{1024}$.</p> Answer 16: <p>Usually you break up ${n+1\choose k}, k\le n$ into ${n\choose k-1}+{n\choose k}$. This is one of the first tricks you learn from probably every baby probability textbook. 
For your problem:</p> <p>$${11\choose 1}+{11\choose 3}+...+{11\choose 9}+{11\choose 11}=\left({10\choose 0}+{10\choose 1}\right)+\left({10\choose 2}+{10\choose 3}\right)+...+\left({10\choose 8}+{10\choose 9}\right)+{10\choose 10}=(1+1)^{10}$$</p> <p>by recognizing ${11\choose 11}={10\choose 10}$.</p> Answer 17: <p>You might recoginize ${m \choose n}$ as the co-efficient of the $x^n$ term in the binomial expansion of $(x+1)^m$</p> <p>So the sum of all of the coefficients equals</p> <p>$(1+1)^m$</p> <p>In your example above only have the odd coefficients.</p> <p>${11 \choose n} ={11 \choose 11-n}$</p> <p>For every odd coefficient you have a symmetric even counterpart.</p> <p>$\sum_\limits {i=0}^{5} {11\choose 2i+1} = \frac 12 \sum_\limits {i=0}^{11} {11\choose i} = 2^{10}$</p> Answer 18: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 0}^{5}{11 \choose 2k + 1} &amp; = \sum_{k = 0}^{10}{11 \choose k + 1}{1 + \pars{-1}^{k} \over 2} = \sum_{k = 1}^{11}{11 \choose k}{1 - \pars{-1}^{k} \over 2} \\[5mm] &amp; = {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}}_{\ds{2^{11} = 2048}}\ -\ {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}\pars{-1}^{k}}_{\ds{=\ 0}}\ = \bbx{\ds{1024}} \end{align}</p> Answer 19: <p>Since any time you choose an odd number of elements out of a set of eleven, you leave 
behind an even number of elements, $$ \sum_{k\text{ odd}} \binom{11}{k} = \sum_{k\text{ even}} \binom{11}{k} $$</p> <p>So the number of ways of choosing any subset of eleven elements is twice the number of ways of choosing an odd number of elements.</p> <p>But to choose any subset, each element is either in our out of the chosen subset: Than gives $2^{11}$.</p> <p>So the number of ways to choose an odd number of elements is $$\frac12\cdot 2^{11} = 2^{10} = 1024$$</p> <p>P.S. This reasoning does not work when the main set has an even number of elements. But the property of the number of ways to choose an odd number of elements being half the total number of subsets still holds. Can you see why?</p> Answer 20: <p>I don't know if this is the fastest method, but you can use the binomial theorem twice: $$2^{11}=(1+1)^{11}=\sum_{i=0}^{11}\binom{11}{i}$$ and $$0=(1-1)^{11}=\sum_{i=0}^{11}\binom{11}{i}(-1)^i.$$ Upon subtracting these two results and dividing by two, we obtain the result.</p> Answer 21: <p>Since $\binom{11}{1}=\binom{11}{10},\binom{11}{3}=\binom{11}{8}$ and so on, $$ \binom{11}{1}+\binom{11}{3}+\ldots+\binom{11}{11} = \binom{11}{10}+\binom{11}{8}+\ldots+\binom{11}{0} $$ so both the terms are half the sum $\binom{11}{0}+\binom{11}{1}+\binom{11}{2}+\ldots+\binom{11}{11}=2^{11}$, i.e. $2^{10}=\color{red}{1024}$.</p> Answer 22: <p>Usually you break up ${n+1\choose k}, k\le n$ into ${n\choose k-1}+{n\choose k}$. This is one of the first tricks you learn from probably every baby probability textbook. 
For your problem:</p> <p>$${11\choose 1}+{11\choose 3}+...+{11\choose 9}+{11\choose 11}=\left({10\choose 0}+{10\choose 1}\right)+\left({10\choose 2}+{10\choose 3}\right)+...+\left({10\choose 8}+{10\choose 9}\right)+{10\choose 10}=(1+1)^{10}$$</p> <p>by recognizing ${11\choose 11}={10\choose 10}$.</p> Answer 23: <p>You might recoginize ${m \choose n}$ as the co-efficient of the $x^n$ term in the binomial expansion of $(x+1)^m$</p> <p>So the sum of all of the coefficients equals</p> <p>$(1+1)^m$</p> <p>In your example above only have the odd coefficients.</p> <p>${11 \choose n} ={11 \choose 11-n}$</p> <p>For every odd coefficient you have a symmetric even counterpart.</p> <p>$\sum_\limits {i=0}^{5} {11\choose 2i+1} = \frac 12 \sum_\limits {i=0}^{11} {11\choose i} = 2^{10}$</p> Answer 24: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 0}^{5}{11 \choose 2k + 1} &amp; = \sum_{k = 0}^{10}{11 \choose k + 1}{1 + \pars{-1}^{k} \over 2} = \sum_{k = 1}^{11}{11 \choose k}{1 - \pars{-1}^{k} \over 2} \\[5mm] &amp; = {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}}_{\ds{2^{11} = 2048}}\ -\ {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}\pars{-1}^{k}}_{\ds{=\ 0}}\ = \bbx{\ds{1024}} \end{align}</p> Answer 25: <p>Since any time you choose an odd number of elements out of a set of eleven, you leave 
behind an even number of elements, $$ \sum_{k\text{ odd}} \binom{11}{k} = \sum_{k\text{ even}} \binom{11}{k} $$</p> <p>So the number of ways of choosing any subset of eleven elements is twice the number of ways of choosing an odd number of elements.</p> <p>But to choose any subset, each element is either in our out of the chosen subset: Than gives $2^{11}$.</p> <p>So the number of ways to choose an odd number of elements is $$\frac12\cdot 2^{11} = 2^{10} = 1024$$</p> <p>P.S. This reasoning does not work when the main set has an even number of elements. But the property of the number of ways to choose an odd number of elements being half the total number of subsets still holds. Can you see why?</p> Answer 26: <p>I don't know if this is the fastest method, but you can use the binomial theorem twice: $$2^{11}=(1+1)^{11}=\sum_{i=0}^{11}\binom{11}{i}$$ and $$0=(1-1)^{11}=\sum_{i=0}^{11}\binom{11}{i}(-1)^i.$$ Upon subtracting these two results and dividing by two, we obtain the result.</p> Answer 27: <p>Since $\binom{11}{1}=\binom{11}{10},\binom{11}{3}=\binom{11}{8}$ and so on, $$ \binom{11}{1}+\binom{11}{3}+\ldots+\binom{11}{11} = \binom{11}{10}+\binom{11}{8}+\ldots+\binom{11}{0} $$ so both the terms are half the sum $\binom{11}{0}+\binom{11}{1}+\binom{11}{2}+\ldots+\binom{11}{11}=2^{11}$, i.e. $2^{10}=\color{red}{1024}$.</p> Answer 28: <p>Usually you break up ${n+1\choose k}, k\le n$ into ${n\choose k-1}+{n\choose k}$. This is one of the first tricks you learn from probably every baby probability textbook. 
For your problem:</p> <p>$${11\choose 1}+{11\choose 3}+...+{11\choose 9}+{11\choose 11}=\left({10\choose 0}+{10\choose 1}\right)+\left({10\choose 2}+{10\choose 3}\right)+...+\left({10\choose 8}+{10\choose 9}\right)+{10\choose 10}=(1+1)^{10}$$</p> <p>by recognizing ${11\choose 11}={10\choose 10}$.</p> Answer 29: <p>You might recoginize ${m \choose n}$ as the co-efficient of the $x^n$ term in the binomial expansion of $(x+1)^m$</p> <p>So the sum of all of the coefficients equals</p> <p>$(1+1)^m$</p> <p>In your example above only have the odd coefficients.</p> <p>${11 \choose n} ={11 \choose 11-n}$</p> <p>For every odd coefficient you have a symmetric even counterpart.</p> <p>$\sum_\limits {i=0}^{5} {11\choose 2i+1} = \frac 12 \sum_\limits {i=0}^{11} {11\choose i} = 2^{10}$</p> Answer 30: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 0}^{5}{11 \choose 2k + 1} &amp; = \sum_{k = 0}^{10}{11 \choose k + 1}{1 + \pars{-1}^{k} \over 2} = \sum_{k = 1}^{11}{11 \choose k}{1 - \pars{-1}^{k} \over 2} \\[5mm] &amp; = {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}}_{\ds{2^{11} = 2048}}\ -\ {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}\pars{-1}^{k}}_{\ds{=\ 0}}\ = \bbx{\ds{1024}} \end{align}</p> Answer 31: <p>Since any time you choose an odd number of elements out of a set of eleven, you leave 
behind an even number of elements, $$ \sum_{k\text{ odd}} \binom{11}{k} = \sum_{k\text{ even}} \binom{11}{k} $$</p> <p>So the number of ways of choosing any subset of eleven elements is twice the number of ways of choosing an odd number of elements.</p> <p>But to choose any subset, each element is either in our out of the chosen subset: Than gives $2^{11}$.</p> <p>So the number of ways to choose an odd number of elements is $$\frac12\cdot 2^{11} = 2^{10} = 1024$$</p> <p>P.S. This reasoning does not work when the main set has an even number of elements. But the property of the number of ways to choose an odd number of elements being half the total number of subsets still holds. Can you see why?</p> Answer 32: <p>I don't know if this is the fastest method, but you can use the binomial theorem twice: $$2^{11}=(1+1)^{11}=\sum_{i=0}^{11}\binom{11}{i}$$ and $$0=(1-1)^{11}=\sum_{i=0}^{11}\binom{11}{i}(-1)^i.$$ Upon subtracting these two results and dividing by two, we obtain the result.</p> Answer 33: <p>Since $\binom{11}{1}=\binom{11}{10},\binom{11}{3}=\binom{11}{8}$ and so on, $$ \binom{11}{1}+\binom{11}{3}+\ldots+\binom{11}{11} = \binom{11}{10}+\binom{11}{8}+\ldots+\binom{11}{0} $$ so both the terms are half the sum $\binom{11}{0}+\binom{11}{1}+\binom{11}{2}+\ldots+\binom{11}{11}=2^{11}$, i.e. $2^{10}=\color{red}{1024}$.</p> Answer 34: <p>Usually you break up ${n+1\choose k}, k\le n$ into ${n\choose k-1}+{n\choose k}$. This is one of the first tricks you learn from probably every baby probability textbook. 
For your problem:</p> <p>$${11\choose 1}+{11\choose 3}+...+{11\choose 9}+{11\choose 11}=\left({10\choose 0}+{10\choose 1}\right)+\left({10\choose 2}+{10\choose 3}\right)+...+\left({10\choose 8}+{10\choose 9}\right)+{10\choose 10}=(1+1)^{10}$$</p> <p>by recognizing ${11\choose 11}={10\choose 10}$.</p> Answer 35: <p>You might recoginize ${m \choose n}$ as the co-efficient of the $x^n$ term in the binomial expansion of $(x+1)^m$</p> <p>So the sum of all of the coefficients equals</p> <p>$(1+1)^m$</p> <p>In your example above only have the odd coefficients.</p> <p>${11 \choose n} ={11 \choose 11-n}$</p> <p>For every odd coefficient you have a symmetric even counterpart.</p> <p>$\sum_\limits {i=0}^{5} {11\choose 2i+1} = \frac 12 \sum_\limits {i=0}^{11} {11\choose i} = 2^{10}$</p> Answer 36: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 0}^{5}{11 \choose 2k + 1} &amp; = \sum_{k = 0}^{10}{11 \choose k + 1}{1 + \pars{-1}^{k} \over 2} = \sum_{k = 1}^{11}{11 \choose k}{1 - \pars{-1}^{k} \over 2} \\[5mm] &amp; = {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}}_{\ds{2^{11} = 2048}}\ -\ {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}\pars{-1}^{k}}_{\ds{=\ 0}}\ = \bbx{\ds{1024}} \end{align}</p> Answer 37: <p>Since any time you choose an odd number of elements out of a set of eleven, you leave 
behind an even number of elements, $$ \sum_{k\text{ odd}} \binom{11}{k} = \sum_{k\text{ even}} \binom{11}{k} $$</p> <p>So the number of ways of choosing any subset of eleven elements is twice the number of ways of choosing an odd number of elements.</p> <p>But to choose any subset, each element is either in our out of the chosen subset: Than gives $2^{11}$.</p> <p>So the number of ways to choose an odd number of elements is $$\frac12\cdot 2^{11} = 2^{10} = 1024$$</p> <p>P.S. This reasoning does not work when the main set has an even number of elements. But the property of the number of ways to choose an odd number of elements being half the total number of subsets still holds. Can you see why?</p> Answer 38: <p>I don't know if this is the fastest method, but you can use the binomial theorem twice: $$2^{11}=(1+1)^{11}=\sum_{i=0}^{11}\binom{11}{i}$$ and $$0=(1-1)^{11}=\sum_{i=0}^{11}\binom{11}{i}(-1)^i.$$ Upon subtracting these two results and dividing by two, we obtain the result.</p> Answer 39: <p>Since $\binom{11}{1}=\binom{11}{10},\binom{11}{3}=\binom{11}{8}$ and so on, $$ \binom{11}{1}+\binom{11}{3}+\ldots+\binom{11}{11} = \binom{11}{10}+\binom{11}{8}+\ldots+\binom{11}{0} $$ so both the terms are half the sum $\binom{11}{0}+\binom{11}{1}+\binom{11}{2}+\ldots+\binom{11}{11}=2^{11}$, i.e. $2^{10}=\color{red}{1024}$.</p> Answer 40: <p>Usually you break up ${n+1\choose k}, k\le n$ into ${n\choose k-1}+{n\choose k}$. This is one of the first tricks you learn from probably every baby probability textbook. 
For your problem:</p> <p>$${11\choose 1}+{11\choose 3}+...+{11\choose 9}+{11\choose 11}=\left({10\choose 0}+{10\choose 1}\right)+\left({10\choose 2}+{10\choose 3}\right)+...+\left({10\choose 8}+{10\choose 9}\right)+{10\choose 10}=(1+1)^{10}$$</p> <p>by recognizing ${11\choose 11}={10\choose 10}$.</p> Answer 41: <p>You might recoginize ${m \choose n}$ as the co-efficient of the $x^n$ term in the binomial expansion of $(x+1)^m$</p> <p>So the sum of all of the coefficients equals</p> <p>$(1+1)^m$</p> <p>In your example above only have the odd coefficients.</p> <p>${11 \choose n} ={11 \choose 11-n}$</p> <p>For every odd coefficient you have a symmetric even counterpart.</p> <p>$\sum_\limits {i=0}^{5} {11\choose 2i+1} = \frac 12 \sum_\limits {i=0}^{11} {11\choose i} = 2^{10}$</p> Answer 42: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 0}^{5}{11 \choose 2k + 1} &amp; = \sum_{k = 0}^{10}{11 \choose k + 1}{1 + \pars{-1}^{k} \over 2} = \sum_{k = 1}^{11}{11 \choose k}{1 - \pars{-1}^{k} \over 2} \\[5mm] &amp; = {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}}_{\ds{2^{11} = 2048}}\ -\ {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}\pars{-1}^{k}}_{\ds{=\ 0}}\ = \bbx{\ds{1024}} \end{align}</p> Answer 43: <p>Since any time you choose an odd number of elements out of a set of eleven, you leave 
behind an even number of elements, $$ \sum_{k\text{ odd}} \binom{11}{k} = \sum_{k\text{ even}} \binom{11}{k} $$</p> <p>So the number of ways of choosing any subset of eleven elements is twice the number of ways of choosing an odd number of elements.</p> <p>But to choose any subset, each element is either in our out of the chosen subset: Than gives $2^{11}$.</p> <p>So the number of ways to choose an odd number of elements is $$\frac12\cdot 2^{11} = 2^{10} = 1024$$</p> <p>P.S. This reasoning does not work when the main set has an even number of elements. But the property of the number of ways to choose an odd number of elements being half the total number of subsets still holds. Can you see why?</p> Answer 44: <p>I don't know if this is the fastest method, but you can use the binomial theorem twice: $$2^{11}=(1+1)^{11}=\sum_{i=0}^{11}\binom{11}{i}$$ and $$0=(1-1)^{11}=\sum_{i=0}^{11}\binom{11}{i}(-1)^i.$$ Upon subtracting these two results and dividing by two, we obtain the result.</p> Answer 45: <p>Since $\binom{11}{1}=\binom{11}{10},\binom{11}{3}=\binom{11}{8}$ and so on, $$ \binom{11}{1}+\binom{11}{3}+\ldots+\binom{11}{11} = \binom{11}{10}+\binom{11}{8}+\ldots+\binom{11}{0} $$ so both the terms are half the sum $\binom{11}{0}+\binom{11}{1}+\binom{11}{2}+\ldots+\binom{11}{11}=2^{11}$, i.e. $2^{10}=\color{red}{1024}$.</p> Answer 46: <p>Usually you break up ${n+1\choose k}, k\le n$ into ${n\choose k-1}+{n\choose k}$. This is one of the first tricks you learn from probably every baby probability textbook. 
For your problem:</p> <p>$${11\choose 1}+{11\choose 3}+...+{11\choose 9}+{11\choose 11}=\left({10\choose 0}+{10\choose 1}\right)+\left({10\choose 2}+{10\choose 3}\right)+...+\left({10\choose 8}+{10\choose 9}\right)+{10\choose 10}=(1+1)^{10}$$</p> <p>by recognizing ${11\choose 11}={10\choose 10}$.</p> Answer 47: <p>You might recoginize ${m \choose n}$ as the co-efficient of the $x^n$ term in the binomial expansion of $(x+1)^m$</p> <p>So the sum of all of the coefficients equals</p> <p>$(1+1)^m$</p> <p>In your example above only have the odd coefficients.</p> <p>${11 \choose n} ={11 \choose 11-n}$</p> <p>For every odd coefficient you have a symmetric even counterpart.</p> <p>$\sum_\limits {i=0}^{5} {11\choose 2i+1} = \frac 12 \sum_\limits {i=0}^{11} {11\choose i} = 2^{10}$</p> Answer 48: <p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{k = 0}^{5}{11 \choose 2k + 1} &amp; = \sum_{k = 0}^{10}{11 \choose k + 1}{1 + \pars{-1}^{k} \over 2} = \sum_{k = 1}^{11}{11 \choose k}{1 - \pars{-1}^{k} \over 2} \\[5mm] &amp; = {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}}_{\ds{2^{11} = 2048}}\ -\ {1 \over 2}\underbrace{\sum_{k = 0}^{11}{11 \choose k}\pars{-1}^{k}}_{\ds{=\ 0}}\ = \bbx{\ds{1024}} \end{align}</p> Answer 49: <p>Since any time you choose an odd number of elements out of a set of eleven, you leave 
behind an even number of elements, $$ \sum_{k\text{ odd}} \binom{11}{k} = \sum_{k\text{ even}} \binom{11}{k} $$</p> <p>So the number of ways of choosing any subset of a set of eleven elements is twice the number of ways of choosing an odd number of elements.</p> <p>But to choose any subset, each element is either in or out of the chosen subset: That gives $2^{11}$.</p> <p>So the number of ways to choose an odd number of elements is $$\frac12\cdot 2^{11} = 2^{10} = 1024$$</p> <p>P.S. This reasoning does not work when the main set has an even number of elements. But the property of the number of ways to choose an odd number of elements being half the total number of subsets still holds. Can you see why?</p>
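The answers above reach $2^{10}=1024$ by several different routes; a quick numeric cross-check of both the direct sum and the half-of-all-subsets identity (a sketch in plain Python; `math.comb` is the standard-library binomial coefficient):

```python
from math import comb

# Direct sum of the odd-index entries in row 11 of Pascal's triangle.
odd_sum = sum(comb(11, k) for k in range(1, 12, 2))

# Even-index entries; (1-1)^11 = 0 forces the two halves to be equal.
even_sum = sum(comb(11, k) for k in range(0, 12, 2))
assert odd_sum == even_sum

# Together they exhaust all 2^11 subsets, so each half is 2^10.
assert odd_sum + even_sum == 2**11

print(odd_sum)  # 1024
```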
Total number of odd numbers in a row of Pascal's triangle is a natural power of two
2,093,295
6
https://math.stackexchange.com/questions/2093295/total-number-of-odd-numbers-in-a-row-of-pascals-triangle-is-a-natural-power-of
<p>This seems like a combinatorial problem, so there might be something simple that hasn't struck me. I do have an idea, but I am unable to proceed from it. The statement of the question is:</p> <blockquote> <p>Prove that in any row of <a href="https://en.wikipedia.org/wiki/Pascal%27s_triangle" rel="nofollow noreferrer">Pascal's triangle</a>, the number of odd elements is <span class="math-container">$2^k$</span>, for some <span class="math-container">$k \in \mathbb{N}$</span>.</p> </blockquote> <p>I was working on a Pascal's triangle in a binary format, where two adjacent 1's add up to 0. Something of the sort:</p> <pre><code>     1
    1 1
   1 0 1
  1 1 1 1
 1 0 0 0 1
1 1 0 0 1 1
</code></pre> <p>It is a well-defined sequence, and the thing I liked about it was that you can add up the 1's in a row to get the number of odd elements, but I haven't been able to generalize this information. I was thinking along the lines of a recursive relation in polynomials whose coefficients would represent the rows of this triangle; then I can just feed 1 into the equation and inductively prove that it is a perfect power of 2.</p> <p>Can someone help me out on a proof along these lines, if I'm going in the right direction here?</p> <p>P.S. I know there is the modulo 2 proof, but can someone help me generalize that binary Pascal's triangle?</p>
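The mod-2 triangle sketched in the question can be generated mechanically: "two adjacent 1's add up to 0" is addition mod 2, i.e. XOR. A small sketch (plain Python; the function name is mine, not from the question) that reproduces the triangle above and prints the count of 1's per row:

```python
def pascal_mod2_rows(n_rows):
    """Yield the first n_rows rows of Pascal's triangle reduced mod 2."""
    row = [1]
    for _ in range(n_rows):
        yield row
        # Each interior entry is the XOR of the two entries above it.
        row = [1] + [a ^ b for a, b in zip(row, row[1:])] + [1]

for r in pascal_mod2_rows(6):
    print(" ".join(map(str, r)), "-> ones:", sum(r))
```

The per-row counts come out as 1, 2, 2, 4, 2, 4, ..., which is exactly the number of odd entries the question asks about, and each count is a power of 2.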
[ { "answer_id": 2093632, "score": 3, "text": "<p>Similar idea to Andreas Caranti's answer:</p>\n\n<p>Kummer's Theorem says that the number of factors of <span class=\"math-container\">$p$</span> in <span class=\"math-container\">$\\binom{n}{k}$</span> is the number of carries when adding <span class=\"math-container\">$k$</span> and <span class=\"math-container\">$n-k$</span> in base-<span class=\"math-container\">$p$</span>. There are no carries when adding <span class=\"math-container\">$k$</span> and <span class=\"math-container\">$n-k$</span> in binary if and only if the bitwise AND <span class=\"math-container\">$k\\wedge(n-k)=0$</span>; when that is true, <span class=\"math-container\">$n=k\\lor(n-k)$</span>. This means that <span class=\"math-container\">$\\binom{n}{k}$</span> is odd when the <span class=\"math-container\">$1$</span> bits of <span class=\"math-container\">$k$</span> are a subset of the <span class=\"math-container\">$1$</span> bits of <span class=\"math-container\">$n$</span>. That is, there are <span class=\"math-container\">$2^{\\sigma_2(n)}$</span> odd entries in row <span class=\"math-container\">$n$</span> of Pascal's Triangle, where <span class=\"math-container\">$\\sigma_2(n)$</span> is the sum of the bits in the binary representation of <span class=\"math-container\">$n$</span>.</p>\n" }, { "answer_id": 2093331, "score": 1, "text": "<p>Will delete because this is not what OP intends, but a proof via <a href=\"https://en.wikipedia.org/wiki/Kummer&#39;s_theorem\" rel=\"nofollow noreferrer\">Kummer's Theorem</a> looks rather neat to me.</p>\n\n<p>Write $n, m$ in base $2$\n$$\nn = 2^{k} n_{k} + \\dots + 2 n_{1} + n_{0},\n\\qquad\nm = 2^{k} m_{k} + \\dots + 2 m_{1} + m_{0},\n$$\nwith $n_{i}, m_{i} \\in \\{ 0, 1 \\}$.</p>\n\n<p>Then </p>\n\n<blockquote>\n <p>$\\dbinom{n}{m}$ is odd iff there are no carries in the sum in base $2$ \n $$\nm + (n - m) = n,\n$$\n that is, iff $m_{i} \\le n_{i}$ for each $i$. 
</p>\n</blockquote>\n\n<p>Given $n$, there are thus\n$$\n\\prod_{i=0}^{k} (n_{i} + 1)\n$$\npossibilities for $m$, and each $n_{i} + 1 \\in \\{ 1, 2 \\}$.</p>\n\n<p>This also tells you what is the exponent $k$ in the question - it is the number of $1$'s in the dyadic expansion of $n$, or equivalently, the number of <em>distinct</em> powers of $2$ that sum up to $n$.</p>\n" }, { "answer_id": 3305880, "score": 1, "text": "<p>The simplest (at least simplest to explain) solution (using Lucas's theorem) has not been posted; let me fill that \"much needed gap\". Lazy as I am, I will mostly copypaste my answer from <a href=\"https://math.stackexchange.com/a/2184460/\">https://math.stackexchange.com/a/2184460/</a> , since the argument is essentially the same.</p>\n\n<p>I use the notation <span class=\"math-container\">$\\mathbb{N}$</span> for the set <span class=\"math-container\">$\\left\\{0,1,2,\\ldots\\right\\}$</span>.</p>\n\n<p>The question asks the following:</p>\n\n<blockquote>\n <p><strong>Theorem 1.</strong> Let <span class=\"math-container\">$n \\in \\mathbb{N}$</span>. Then, the number of <span class=\"math-container\">$i \\in \\left\\{0,1,\\ldots,n\\right\\}$</span> such that <span class=\"math-container\">$\\dbinom{n}{i}$</span> is odd is a power of <span class=\"math-container\">$2$</span>.</p>\n</blockquote>\n\n<p>I shall prove something slightly more general:</p>\n\n<blockquote>\n <p><strong>Theorem 2.</strong> Let <span class=\"math-container\">$n \\in \\mathbb{N}$</span>. Let <span class=\"math-container\">$q$</span> be a real number. 
Then, there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n <span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} q^i = \\prod_{g \\in G} \\left(q^{2^g} + 1\\right) .\n\\end{align}</span></p>\n</blockquote>\n\n<p><em>Proof of Theorem 2.</em>\nWrite <span class=\"math-container\">$n$</span> in the form <span class=\"math-container\">$n=a_{k}2^{k}+a_{k-1}2^{k-1}+\\cdots+a_{0}2^{0}$</span> for some\n<span class=\"math-container\">$k\\in\\mathbb{N}$</span> and <span class=\"math-container\">$a_{0},a_{1},\\ldots,a_{k}\\in\\left\\{ 0,1\\right\\} $</span>.\n(This is just the base-<span class=\"math-container\">$2$</span> representation of <span class=\"math-container\">$n$</span>, possibly with leading zeroes.)</p>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\" rel=\"nofollow noreferrer\">Lucas's theorem</a> tells you\nthat if <span class=\"math-container\">$i=b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}$</span> for some <span class=\"math-container\">$b_{0}\n,b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>, then\n<span class=\"math-container\">\\begin{align}\n\\dbinom{n}{i}\n&amp;\\equiv\n\\dbinom{a_{k}}{b_{k}}\\dbinom{a_{k-1}}{b_{k-1}}\n\\cdots\\dbinom{a_{0}}{b_{0}}=\\prod\\limits_{j=0}^{k}\\underbrace{\\dbinom{a_{j}}{b_{j}}\n}_{\\substack{=\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\\\\n0, &amp; \\text{if }b_{j}&gt;a_{j}\n\\end{cases}\n\\\\\\text{(since }a_{j}\\text{ and }b_{j}\\text{ lie in }\\left\\{ 0,1\\right\\}\n\\text{)}}} \\\\\n&amp;=\\prod\\limits_{j=0}^{k}\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\\\\n0, &amp; \\text{if }b_{j}&gt;a_{j}\n\\end{cases}\n=\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\text{ for all }j\\text{;}\\\\\n0, &amp; \\text{otherwise}\n\\end{cases}\n\\mod 2 .\n\\end{align}</span>\nHence, the 
<span class=\"math-container\">$i\\in\\mathbb{N}$</span> for which <span class=\"math-container\">$\\dbinom{n}{i}$</span> is\nodd are precisely the numbers of the form <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}\n+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$b_{0},b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>\nsatisfying <span class=\"math-container\">$\\left( b_{j}\\leq a_{j}\\text{ for all }j\\right) $</span>.\nSince all these <span class=\"math-container\">$i$</span> satisfy <span class=\"math-container\">$i \\in \\left\\{ 0,1,\\ldots,n\\right\\}$</span>\n(because otherwise, <span class=\"math-container\">$\\dbinom{n}{i}$</span> would be <span class=\"math-container\">$0$</span> and therefore could not\nbe odd), we can rewrite this as follows: The\n<span class=\"math-container\">$i \\in \\left\\{ 0,1,\\ldots,n\\right\\}$</span> for which <span class=\"math-container\">$\\dbinom{n}{i}$</span> is\nodd are precisely the numbers of the form <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}\n+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$b_{0},b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>\nsatisfying <span class=\"math-container\">$\\left( b_{j}\\leq a_{j}\\text{ for all }j\\right) $</span>.\nSince these\nnumbers are distinct (because the base-<span class=\"math-container\">$2$</span> representation of any\n<span class=\"math-container\">$i\\in\\mathbb{N}$</span> is unique, as long as we fix the number of digits), we thus\ncan substitute <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$i$</span> in the\nsum <span class=\"math-container\">$\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\} ;\\\\\n\\dbinom{n}{i}\\text{ is odd}}}q^{i}$</span>. 
Thus, this sum rewrites as follows:\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\} ;\\\\\n\\dbinom{n}{i}\\text{ is odd}}}q^{i}\n&amp;=\\underbrace{\\sum\\limits_{\\substack{b_{0},b_{1}\n,\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} ;\\\\b_{j}\\leq a_{j}\\text{ for all }j}\n}}_{=\\sum\\limits_{b_{0}=0}^{a_{0}}\\sum\\limits_{b_{1}=0}^{a_{1}}\\cdots\\sum\\limits_{b_{k}=0}^{a_k}\n}\\underbrace{q^{b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}}}_{=\\left(\nq^{2^{k}}\\right) ^{b_{k}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}}\\cdots\\left(\nq^{2^{0}}\\right) ^{b_{0}}} \\\\\n&amp;=\\sum\\limits_{b_{0}=0}^{a_{0}}\\sum\\limits_{b_{1}=0}^{a_{1}}\\cdots\\sum\\limits_{b_{k}=0}^{a_{k}\n}\\left( q^{2^{k}}\\right) ^{b_{k}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}\n}\\cdots\\left( q^{2^{0}}\\right) ^{b_{0}} \\\\\n&amp;=\\left( \\sum\\limits_{b_{k}=0}^{a_{k}}\\left( q^{2^{k}}\\right) ^{b_{k}}\\right)\n\\left( \\sum\\limits_{b_{k-1}=0}^{a_{k-1}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}\n}\\right) \\cdots\\left( \\sum\\limits_{b_{0}=0}^{a_{0}}\\left( q^{2^{0}}\\right)\n^{b_{0}}\\right) \\\\\n&amp;=\\left( \\sum\\limits_{b=0}^{a_{k}}\\left( q^{2^{k}}\\right) ^{b}\\right) \\left(\n\\sum\\limits_{b=0}^{a_{k-1}}\\left( q^{2^{k-1}}\\right) ^{b}\\right) \\cdots\\left(\n\\sum\\limits_{b=0}^{a_{0}}\\left( q^{2^{0}}\\right) ^{b}\\right) \\\\\n&amp;=\\prod\\limits_{g=0}^{k}\\underbrace{\\left( \\sum\\limits_{b=0}^{a_{g}}\\left( q^{2^{g}}\\right) ^{b}\\right) }_{\\substack{=\n\\begin{cases}\nq^{2^{g}}+1, &amp; \\text{if }a_{g}=1;\\\\\n1 &amp; \\text{if }a_{g}=0\n\\end{cases}\n\\\\\\text{(since }a_{g}\\in\\left\\{ 0,1\\right\\} \\text{)}}} \\\\\n&amp;=\\prod\\limits_{g=0}^{k}\n\\begin{cases}\nq^{2^{g}}+1, &amp; \\text{if }a_{g}=1;\\\\\n1 &amp; \\text{if }a_{g}=0\n\\end{cases}\n\\\\\n&amp;=\\left( \\prod\\limits_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}\n=1}}\\left( q^{2^{g}}+1\\right) \\right) \\underbrace{\\left( 
\\prod\\limits\n_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}=0}}1\\right) }_{=1} \\\\\n&amp;=\\prod\\limits_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}=1}}\\left(\nq^{2^{g}}+1\\right) .\n\\end{align}</span>\nThus, there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} q^i = \\prod_{g \\in G} \\left(q^{2^g} + 1\\right)\n\\end{align}</span>\n(namely, <span class=\"math-container\">$G$</span> is the set of all <span class=\"math-container\">$g\\in\\left\\{ 0,1,\\ldots,k\\right\\}$</span> satisfying <span class=\"math-container\">$a_g = 1$</span>). This proves Theorem 2. <span class=\"math-container\">$\\blacksquare$</span></p>\n\n<p><em>Proof of Theorem 1.</em> Theorem 2 (applied to <span class=\"math-container\">$q = 1$</span>) shows that there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1^i = \\prod_{g \\in G} \\left(1^{2^g} + 1\\right) .\n\\end{align}</span>\nConsider this <span class=\"math-container\">$G$</span>.\nComparing\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1^i = \\prod_{g \\in G} \\underbrace{\\left(1^{2^g} + 1\\right)}_{= 1+1 = 2}\n= \\prod_{g \\in G} 2 = 2^{\\left|G\\right|}\n\\end{align}</span>\nwith\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} \\underbrace{1^i}_{=1}\n&amp; = 
\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1 \\\\\n&amp; = \\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) \\cdot 1 \\\\\n&amp; = \\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) ,\n\\end{align}</span>\nwe obtain\n<span class=\"math-container\">\\begin{align}\n\\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) = 2^{\\left|G\\right|} .\n\\end{align}</span>\nHence, the number of <span class=\"math-container\">$i \\in \\left\\{0,1,\\ldots,n\\right\\}$</span> such that <span class=\"math-container\">$\\dbinom{n}{i}$</span> is odd is a power of <span class=\"math-container\">$2$</span>. This proves Theorem 1. <span class=\"math-container\">$\\blacksquare$</span></p>\n" } ]
That is, there are <span class=\"math-container\">$2^{\\sigma_2(n)}$</span> odd entries in row <span class=\"math-container\">$n$</span> of Pascal's Triangle, where <span class=\"math-container\">$\\sigma_2(n)$</span> is the sum of the bits in the binary representation of <span class=\"math-container\">$n$</span>.</p>\n" }, { "answer_id": 2093331, "score": 1, "text": "<p>Will delete because this is not what OP intends, but a proof via <a href=\"https://en.wikipedia.org/wiki/Kummer&#39;s_theorem\" rel=\"nofollow noreferrer\">Kummer's Theorem</a> looks rather neat to me.</p>\n\n<p>Write $n, m$ in base $2$\n$$\nn = 2^{k} n_{k} + \\dots + 2 n_{1} + n_{0},\n\\qquad\nm = 2^{k} m_{k} + \\dots + 2 m_{1} + m_{0},\n$$\nwith $n_{i}, m_{i} \\in \\{ 0, 1 \\}$.</p>\n\n<p>Then </p>\n\n<blockquote>\n <p>$\\dbinom{n}{m}$ is odd iff there are no carries in the sum in base $2$ \n $$\nm + (n - m) = n,\n$$\n that is, iff $m_{i} \\le n_{i}$ for each $i$. </p>\n</blockquote>\n\n<p>Given $n$, there are thus\n$$\n\\prod_{i=0}^{k} (n_{i} + 1)\n$$\npossibilities for $m$, and each $n_{i} + 1 \\in \\{ 1, 2 \\}$.</p>\n\n<p>This also tells you what is the exponent $k$ in the question - it is the number of $1$'s in the dyadic expansion of $n$, or equivalently, the number of <em>distinct</em> powers of $2$ that sum up to $n$.</p>\n" }
\\prod\\limits\n_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}=0}}1\\right) }_{=1} \\\\\n&amp;=\\prod\\limits_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}=1}}\\left(\nq^{2^{g}}+1\\right) .\n\\end{align}</span>\nThus, there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} q^i = \\prod_{g \\in G} \\left(q^{2^g} + 1\\right)\n\\end{align}</span>\n(namely, <span class=\"math-container\">$G$</span> is the set of all <span class=\"math-container\">$g\\in\\left\\{ 0,1,\\ldots,k\\right\\}$</span> satisfying <span class=\"math-container\">$a_g = 1$</span>). This proves Theorem 2. <span class=\"math-container\">$\\blacksquare$</span></p>\n\n<p><em>Proof of Theorem 1.</em> Theorem 2 (applied to <span class=\"math-container\">$q = 1$</span>) shows that there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1^i = \\prod_{g \\in G} \\left(1^{2^g} + 1\\right) .\n\\end{align}</span>\nConsider this <span class=\"math-container\">$G$</span>.\nComparing\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1^i = \\prod_{g \\in G} \\underbrace{\\left(1^{2^g} + 1\\right)}_{= 1+1 = 2}\n= \\prod_{g \\in G} 2 = 2^{\\left|G\\right|}\n\\end{align}</span>\nwith\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} \\underbrace{1^i}_{=1}\n&amp; = 
\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1 \\\\\n&amp; = \\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) \\cdot 1 \\\\\n&amp; = \\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) ,\n\\end{align}</span>\nwe obtain\n<span class=\"math-container\">\\begin{align}\n\\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) = 2^{\\left|G\\right|} .\n\\end{align}</span>\nHence, the number of <span class=\"math-container\">$i \\in \\left\\{0,1,\\ldots,n\\right\\}$</span> such that <span class=\"math-container\">$\\dbinom{n}{i}$</span> is odd is a power of <span class=\"math-container\">$2$</span>. This proves Theorem 1. <span class=\"math-container\">$\\blacksquare$</span></p>\n" }, { "answer_id": 2093632, "score": 3, "text": "<p>Similar idea to Andreas Caranti's answer:</p>\n\n<p>Kummer's Theorem says that the number of factors of <span class=\"math-container\">$p$</span> in <span class=\"math-container\">$\\binom{n}{k}$</span> is the number of carries when adding <span class=\"math-container\">$k$</span> and <span class=\"math-container\">$n-k$</span> in base-<span class=\"math-container\">$p$</span>. There are no carries when adding <span class=\"math-container\">$k$</span> and <span class=\"math-container\">$n-k$</span> in binary if and only if <span class=\"math-container\">$k\\veebar n-k=0$</span>; when that is true, <span class=\"math-container\">$n=k\\lor n-k$</span>. This means that <span class=\"math-container\">$\\binom{n}{k}$</span> is odd when the <span class=\"math-container\">$1$</span> bits of <span class=\"math-container\">$k$</span> are a subset of the <span class=\"math-container\">$1$</span> bits of <span class=\"math-container\">$n$</span>. 
That is, there are <span class=\"math-container\">$2^{\\sigma_2(n)}$</span> odd entries in row <span class=\"math-container\">$n$</span> of Pascal's Triangle, where <span class=\"math-container\">$\\sigma_2(n)$</span> is the sum of the bits in the binary representation of <span class=\"math-container\">$n$</span>.</p>\n" }, { "answer_id": 2093331, "score": 1, "text": "<p>Will delete because this is not what OP intends, but a proof via <a href=\"https://en.wikipedia.org/wiki/Kummer&#39;s_theorem\" rel=\"nofollow noreferrer\">Kummer's Theorem</a> looks rather neat to me.</p>\n\n<p>Write $n, m$ in base $2$\n$$\nn = 2^{k} n_{k} + \\dots + 2 n_{1} + n_{0},\n\\qquad\nm = 2^{k} m_{k} + \\dots + 2 m_{1} + m_{0},\n$$\nwith $n_{i}, m_{i} \\in \\{ 0, 1 \\}$.</p>\n\n<p>Then </p>\n\n<blockquote>\n <p>$\\dbinom{n}{m}$ is odd iff there are no carries in the sum in base $2$ \n $$\nm + (n - m) = n,\n$$\n that is, iff $m_{i} \\le n_{i}$ for each $i$. </p>\n</blockquote>\n\n<p>Given $n$, there are thus\n$$\n\\prod_{i=0}^{k} (n_{i} + 1)\n$$\npossibilities for $m$, and each $n_{i} + 1 \\in \\{ 1, 2 \\}$.</p>\n\n<p>This also tells you what is the exponent $k$ in the question - it is the number of $1$'s in the diadic expansion of $n$, or equivalently, the number of <em>distinct</em> powers of $2$ that sum up to $n$.</p>\n" }, { "answer_id": 3305880, "score": 1, "text": "<p>The simplest (at least simplest to explain) solution (using Lucas's theorem) has not been posted; let me fill that \"much needed gap\". 
Lazy as I am, I will mostly copypaste my answer from <a href=\"https://math.stackexchange.com/a/2184460/\">https://math.stackexchange.com/a/2184460/</a> , since the argument is essentially the same.</p>\n\n<p>I use the notation <span class=\"math-container\">$\\mathbb{N}$</span> for the set <span class=\"math-container\">$\\left\\{0,1,2,\\ldots\\right\\}$</span>.</p>\n\n<p>The question asks the following:</p>\n\n<blockquote>\n <p><strong>Theorem 1.</strong> Let <span class=\"math-container\">$n \\in \\mathbb{N}$</span>. Then, the number of <span class=\"math-container\">$i \\in \\left\\{0,1,\\ldots,n\\right\\}$</span> such that <span class=\"math-container\">$\\dbinom{n}{i}$</span> is odd is a power of <span class=\"math-container\">$2$</span>.</p>\n</blockquote>\n\n<p>I shall prove something slightly more general:</p>\n\n<blockquote>\n <p><strong>Theorem 2.</strong> Let <span class=\"math-container\">$n \\in \\mathbb{N}$</span>. Let <span class=\"math-container\">$q$</span> be a real number. 
Then, there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n <span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} q^i = \\prod_{g \\in G} \\left(q^{2^g} + 1\\right) .\n\\end{align}</span></p>\n</blockquote>\n\n<p><em>Proof of Theorem 2.</em>\nWrite <span class=\"math-container\">$n$</span> in the form <span class=\"math-container\">$n=a_{k}2^{k}+a_{k-1}2^{k-1}+\\cdots+a_{0}2^{0}$</span> for some\n<span class=\"math-container\">$k\\in\\mathbb{N}$</span> and <span class=\"math-container\">$a_{0},a_{1},\\ldots,a_{k}\\in\\left\\{ 0,1\\right\\} $</span>.\n(This is just the base-<span class=\"math-container\">$2$</span> representation of <span class=\"math-container\">$n$</span>, possibly with leading zeroes.)</p>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\" rel=\"nofollow noreferrer\">Lucas's theorem</a> tells you\nthat if <span class=\"math-container\">$i=b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}$</span> for some <span class=\"math-container\">$b_{0}\n,b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>, then\n<span class=\"math-container\">\\begin{align}\n\\dbinom{n}{i}\n&amp;\\equiv\n\\dbinom{a_{k}}{b_{k}}\\dbinom{a_{k-1}}{b_{k-1}}\n\\cdots\\dbinom{a_{0}}{b_{0}}=\\prod\\limits_{j=0}^{k}\\underbrace{\\dbinom{a_{j}}{b_{j}}\n}_{\\substack{=\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\\\\n0, &amp; \\text{if }b_{j}&gt;a_{j}\n\\end{cases}\n\\\\\\text{(since }a_{j}\\text{ and }b_{j}\\text{ lie in }\\left\\{ 0,1\\right\\}\n\\text{)}}} \\\\\n&amp;=\\prod\\limits_{j=0}^{k}\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\\\\n0, &amp; \\text{if }b_{j}&gt;a_{j}\n\\end{cases}\n=\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\text{ for all }j\\text{;}\\\\\n0, &amp; \\text{otherwise}\n\\end{cases}\n\\mod 2 .\n\\end{align}</span>\nHence, the 
<span class=\"math-container\">$i\\in\\mathbb{N}$</span> for which <span class=\"math-container\">$\\dbinom{n}{i}$</span> is\nodd are precisely the numbers of the form <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}\n+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$b_{0},b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>\nsatisfying <span class=\"math-container\">$\\left( b_{j}\\leq a_{j}\\text{ for all }j\\right) $</span>.\nSince all these <span class=\"math-container\">$i$</span> satisfy <span class=\"math-container\">$i \\in \\left\\{ 0,1,\\ldots,n\\right\\}$</span>\n(because otherwise, <span class=\"math-container\">$\\dbinom{n}{i}$</span> would be <span class=\"math-container\">$0$</span> and therefore could not\nbe odd), we can rewrite this as follows: The\n<span class=\"math-container\">$i \\in \\left\\{ 0,1,\\ldots,n\\right\\}$</span> for which <span class=\"math-container\">$\\dbinom{n}{i}$</span> is\nodd are precisely the numbers of the form <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}\n+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$b_{0},b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>\nsatisfying <span class=\"math-container\">$\\left( b_{j}\\leq a_{j}\\text{ for all }j\\right) $</span>.\nSince these\nnumbers are distinct (because the base-<span class=\"math-container\">$2$</span> representation of any\n<span class=\"math-container\">$i\\in\\mathbb{N}$</span> is unique, as long as we fix the number of digits), we thus\ncan substitute <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$i$</span> in the\nsum <span class=\"math-container\">$\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\} ;\\\\\n\\dbinom{n}{i}\\text{ is odd}}}q^{i}$</span>. 
Thus, this sum rewrites as follows:\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\} ;\\\\\n\\dbinom{n}{i}\\text{ is odd}}}q^{i}\n&amp;=\\underbrace{\\sum\\limits_{\\substack{b_{0},b_{1}\n,\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} ;\\\\b_{j}\\leq a_{j}\\text{ for all }j}\n}}_{=\\sum\\limits_{b_{0}=0}^{a_{0}}\\sum\\limits_{b_{1}=0}^{a_{1}}\\cdots\\sum\\limits_{b_{k}=0}^{a_k}\n}\\underbrace{q^{b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}}}_{=\\left(\nq^{2^{k}}\\right) ^{b_{k}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}}\\cdots\\left(\nq^{2^{0}}\\right) ^{b_{0}}} \\\\\n&amp;=\\sum\\limits_{b_{0}=0}^{a_{0}}\\sum\\limits_{b_{1}=0}^{a_{1}}\\cdots\\sum\\limits_{b_{k}=0}^{a_{k}\n}\\left( q^{2^{k}}\\right) ^{b_{k}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}\n}\\cdots\\left( q^{2^{0}}\\right) ^{b_{0}} \\\\\n&amp;=\\left( \\sum\\limits_{b_{k}=0}^{a_{k}}\\left( q^{2^{k}}\\right) ^{b_{k}}\\right)\n\\left( \\sum\\limits_{b_{k-1}=0}^{a_{k-1}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}\n}\\right) \\cdots\\left( \\sum\\limits_{b_{0}=0}^{a_{0}}\\left( q^{2^{0}}\\right)\n^{b_{0}}\\right) \\\\\n&amp;=\\left( \\sum\\limits_{b=0}^{a_{k}}\\left( q^{2^{k}}\\right) ^{b}\\right) \\left(\n\\sum\\limits_{b=0}^{a_{k-1}}\\left( q^{2^{k-1}}\\right) ^{b}\\right) \\cdots\\left(\n\\sum\\limits_{b=0}^{a_{0}}\\left( q^{2^{0}}\\right) ^{b}\\right) \\\\\n&amp;=\\prod\\limits_{g=0}^{k}\\underbrace{\\left( \\sum\\limits_{b=0}^{a_{g}}\\left( q^{2^{g}}\\right) ^{b}\\right) }_{\\substack{=\n\\begin{cases}\nq^{2^{g}}+1, &amp; \\text{if }a_{g}=1;\\\\\n1 &amp; \\text{if }a_{g}=0\n\\end{cases}\n\\\\\\text{(since }a_{g}\\in\\left\\{ 0,1\\right\\} \\text{)}}} \\\\\n&amp;=\\prod\\limits_{g=0}^{k}\n\\begin{cases}\nq^{2^{g}}+1, &amp; \\text{if }a_{g}=1;\\\\\n1 &amp; \\text{if }a_{g}=0\n\\end{cases}\n\\\\\n&amp;=\\left( \\prod\\limits_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}\n=1}}\\left( q^{2^{g}}+1\\right) \\right) \\underbrace{\\left( 
\\prod\\limits\n_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}=0}}1\\right) }_{=1} \\\\\n&amp;=\\prod\\limits_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}=1}}\\left(\nq^{2^{g}}+1\\right) .\n\\end{align}</span>\nThus, there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} q^i = \\prod_{g \\in G} \\left(q^{2^g} + 1\\right)\n\\end{align}</span>\n(namely, <span class=\"math-container\">$G$</span> is the set of all <span class=\"math-container\">$g\\in\\left\\{ 0,1,\\ldots,k\\right\\}$</span> satisfying <span class=\"math-container\">$a_g = 1$</span>). This proves Theorem 2. <span class=\"math-container\">$\\blacksquare$</span></p>\n\n<p><em>Proof of Theorem 1.</em> Theorem 2 (applied to <span class=\"math-container\">$q = 1$</span>) shows that there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1^i = \\prod_{g \\in G} \\left(1^{2^g} + 1\\right) .\n\\end{align}</span>\nConsider this <span class=\"math-container\">$G$</span>.\nComparing\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1^i = \\prod_{g \\in G} \\underbrace{\\left(1^{2^g} + 1\\right)}_{= 1+1 = 2}\n= \\prod_{g \\in G} 2 = 2^{\\left|G\\right|}\n\\end{align}</span>\nwith\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} \\underbrace{1^i}_{=1}\n&amp; = 
\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1 \\\\\n&amp; = \\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) \\cdot 1 \\\\\n&amp; = \\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) ,\n\\end{align}</span>\nwe obtain\n<span class=\"math-container\">\\begin{align}\n\\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) = 2^{\\left|G\\right|} .\n\\end{align}</span>\nHence, the number of <span class=\"math-container\">$i \\in \\left\\{0,1,\\ldots,n\\right\\}$</span> such that <span class=\"math-container\">$\\dbinom{n}{i}$</span> is odd is a power of <span class=\"math-container\">$2$</span>. This proves Theorem 1. <span class=\"math-container\">$\\blacksquare$</span></p>\n" }, { "answer_id": 2093632, "score": 3, "text": "<p>Similar idea to Andreas Caranti's answer:</p>\n\n<p>Kummer's Theorem says that the number of factors of <span class=\"math-container\">$p$</span> in <span class=\"math-container\">$\\binom{n}{k}$</span> is the number of carries when adding <span class=\"math-container\">$k$</span> and <span class=\"math-container\">$n-k$</span> in base-<span class=\"math-container\">$p$</span>. There are no carries when adding <span class=\"math-container\">$k$</span> and <span class=\"math-container\">$n-k$</span> in binary if and only if <span class=\"math-container\">$k\\veebar n-k=0$</span>; when that is true, <span class=\"math-container\">$n=k\\lor n-k$</span>. This means that <span class=\"math-container\">$\\binom{n}{k}$</span> is odd when the <span class=\"math-container\">$1$</span> bits of <span class=\"math-container\">$k$</span> are a subset of the <span class=\"math-container\">$1$</span> bits of <span class=\"math-container\">$n$</span>. 
That is, there are <span class=\"math-container\">$2^{\\sigma_2(n)}$</span> odd entries in row <span class=\"math-container\">$n$</span> of Pascal's Triangle, where <span class=\"math-container\">$\\sigma_2(n)$</span> is the sum of the bits in the binary representation of <span class=\"math-container\">$n$</span>.</p>\n" }, { "answer_id": 2093331, "score": 1, "text": "<p>Will delete because this is not what OP intends, but a proof via <a href=\"https://en.wikipedia.org/wiki/Kummer&#39;s_theorem\" rel=\"nofollow noreferrer\">Kummer's Theorem</a> looks rather neat to me.</p>\n\n<p>Write $n, m$ in base $2$\n$$\nn = 2^{k} n_{k} + \\dots + 2 n_{1} + n_{0},\n\\qquad\nm = 2^{k} m_{k} + \\dots + 2 m_{1} + m_{0},\n$$\nwith $n_{i}, m_{i} \\in \\{ 0, 1 \\}$.</p>\n\n<p>Then </p>\n\n<blockquote>\n <p>$\\dbinom{n}{m}$ is odd iff there are no carries in the sum in base $2$ \n $$\nm + (n - m) = n,\n$$\n that is, iff $m_{i} \\le n_{i}$ for each $i$. </p>\n</blockquote>\n\n<p>Given $n$, there are thus\n$$\n\\prod_{i=0}^{k} (n_{i} + 1)\n$$\npossibilities for $m$, and each $n_{i} + 1 \\in \\{ 1, 2 \\}$.</p>\n\n<p>This also tells you what is the exponent $k$ in the question - it is the number of $1$'s in the diadic expansion of $n$, or equivalently, the number of <em>distinct</em> powers of $2$ that sum up to $n$.</p>\n" }, { "answer_id": 3305880, "score": 1, "text": "<p>The simplest (at least simplest to explain) solution (using Lucas's theorem) has not been posted; let me fill that \"much needed gap\". 
Lazy as I am, I will mostly copypaste my answer from <a href=\"https://math.stackexchange.com/a/2184460/\">https://math.stackexchange.com/a/2184460/</a> , since the argument is essentially the same.</p>\n\n<p>I use the notation <span class=\"math-container\">$\\mathbb{N}$</span> for the set <span class=\"math-container\">$\\left\\{0,1,2,\\ldots\\right\\}$</span>.</p>\n\n<p>The question asks the following:</p>\n\n<blockquote>\n <p><strong>Theorem 1.</strong> Let <span class=\"math-container\">$n \\in \\mathbb{N}$</span>. Then, the number of <span class=\"math-container\">$i \\in \\left\\{0,1,\\ldots,n\\right\\}$</span> such that <span class=\"math-container\">$\\dbinom{n}{i}$</span> is odd is a power of <span class=\"math-container\">$2$</span>.</p>\n</blockquote>\n\n<p>I shall prove something slightly more general:</p>\n\n<blockquote>\n <p><strong>Theorem 2.</strong> Let <span class=\"math-container\">$n \\in \\mathbb{N}$</span>. Let <span class=\"math-container\">$q$</span> be a real number. 
Then, there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n <span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} q^i = \\prod_{g \\in G} \\left(q^{2^g} + 1\\right) .\n\\end{align}</span></p>\n</blockquote>\n\n<p><em>Proof of Theorem 2.</em>\nWrite <span class=\"math-container\">$n$</span> in the form <span class=\"math-container\">$n=a_{k}2^{k}+a_{k-1}2^{k-1}+\\cdots+a_{0}2^{0}$</span> for some\n<span class=\"math-container\">$k\\in\\mathbb{N}$</span> and <span class=\"math-container\">$a_{0},a_{1},\\ldots,a_{k}\\in\\left\\{ 0,1\\right\\} $</span>.\n(This is just the base-<span class=\"math-container\">$2$</span> representation of <span class=\"math-container\">$n$</span>, possibly with leading zeroes.)</p>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\" rel=\"nofollow noreferrer\">Lucas's theorem</a> tells you\nthat if <span class=\"math-container\">$i=b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}$</span> for some <span class=\"math-container\">$b_{0}\n,b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>, then\n<span class=\"math-container\">\\begin{align}\n\\dbinom{n}{i}\n&amp;\\equiv\n\\dbinom{a_{k}}{b_{k}}\\dbinom{a_{k-1}}{b_{k-1}}\n\\cdots\\dbinom{a_{0}}{b_{0}}=\\prod\\limits_{j=0}^{k}\\underbrace{\\dbinom{a_{j}}{b_{j}}\n}_{\\substack{=\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\\\\n0, &amp; \\text{if }b_{j}&gt;a_{j}\n\\end{cases}\n\\\\\\text{(since }a_{j}\\text{ and }b_{j}\\text{ lie in }\\left\\{ 0,1\\right\\}\n\\text{)}}} \\\\\n&amp;=\\prod\\limits_{j=0}^{k}\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\\\\n0, &amp; \\text{if }b_{j}&gt;a_{j}\n\\end{cases}\n=\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\text{ for all }j\\text{;}\\\\\n0, &amp; \\text{otherwise}\n\\end{cases}\n\\mod 2 .\n\\end{align}</span>\nHence, the 
<span class=\"math-container\">$i\\in\\mathbb{N}$</span> for which <span class=\"math-container\">$\\dbinom{n}{i}$</span> is\nodd are precisely the numbers of the form <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}\n+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$b_{0},b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>\nsatisfying <span class=\"math-container\">$\\left( b_{j}\\leq a_{j}\\text{ for all }j\\right) $</span>.\nSince all these <span class=\"math-container\">$i$</span> satisfy <span class=\"math-container\">$i \\in \\left\\{ 0,1,\\ldots,n\\right\\}$</span>\n(because otherwise, <span class=\"math-container\">$\\dbinom{n}{i}$</span> would be <span class=\"math-container\">$0$</span> and therefore could not\nbe odd), we can rewrite this as follows: The\n<span class=\"math-container\">$i \\in \\left\\{ 0,1,\\ldots,n\\right\\}$</span> for which <span class=\"math-container\">$\\dbinom{n}{i}$</span> is\nodd are precisely the numbers of the form <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}\n+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$b_{0},b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>\nsatisfying <span class=\"math-container\">$\\left( b_{j}\\leq a_{j}\\text{ for all }j\\right) $</span>.\nSince these\nnumbers are distinct (because the base-<span class=\"math-container\">$2$</span> representation of any\n<span class=\"math-container\">$i\\in\\mathbb{N}$</span> is unique, as long as we fix the number of digits), we thus\ncan substitute <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$i$</span> in the\nsum <span class=\"math-container\">$\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\} ;\\\\\n\\dbinom{n}{i}\\text{ is odd}}}q^{i}$</span>. 
Thus, this sum rewrites as follows:\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\} ;\\\\\n\\dbinom{n}{i}\\text{ is odd}}}q^{i}\n&amp;=\\underbrace{\\sum\\limits_{\\substack{b_{0},b_{1}\n,\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} ;\\\\b_{j}\\leq a_{j}\\text{ for all }j}\n}}_{=\\sum\\limits_{b_{0}=0}^{a_{0}}\\sum\\limits_{b_{1}=0}^{a_{1}}\\cdots\\sum\\limits_{b_{k}=0}^{a_k}\n}\\underbrace{q^{b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}}}_{=\\left(\nq^{2^{k}}\\right) ^{b_{k}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}}\\cdots\\left(\nq^{2^{0}}\\right) ^{b_{0}}} \\\\\n&amp;=\\sum\\limits_{b_{0}=0}^{a_{0}}\\sum\\limits_{b_{1}=0}^{a_{1}}\\cdots\\sum\\limits_{b_{k}=0}^{a_{k}\n}\\left( q^{2^{k}}\\right) ^{b_{k}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}\n}\\cdots\\left( q^{2^{0}}\\right) ^{b_{0}} \\\\\n&amp;=\\left( \\sum\\limits_{b_{k}=0}^{a_{k}}\\left( q^{2^{k}}\\right) ^{b_{k}}\\right)\n\\left( \\sum\\limits_{b_{k-1}=0}^{a_{k-1}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}\n}\\right) \\cdots\\left( \\sum\\limits_{b_{0}=0}^{a_{0}}\\left( q^{2^{0}}\\right)\n^{b_{0}}\\right) \\\\\n&amp;=\\left( \\sum\\limits_{b=0}^{a_{k}}\\left( q^{2^{k}}\\right) ^{b}\\right) \\left(\n\\sum\\limits_{b=0}^{a_{k-1}}\\left( q^{2^{k-1}}\\right) ^{b}\\right) \\cdots\\left(\n\\sum\\limits_{b=0}^{a_{0}}\\left( q^{2^{0}}\\right) ^{b}\\right) \\\\\n&amp;=\\prod\\limits_{g=0}^{k}\\underbrace{\\left( \\sum\\limits_{b=0}^{a_{g}}\\left( q^{2^{g}}\\right) ^{b}\\right) }_{\\substack{=\n\\begin{cases}\nq^{2^{g}}+1, &amp; \\text{if }a_{g}=1;\\\\\n1 &amp; \\text{if }a_{g}=0\n\\end{cases}\n\\\\\\text{(since }a_{g}\\in\\left\\{ 0,1\\right\\} \\text{)}}} \\\\\n&amp;=\\prod\\limits_{g=0}^{k}\n\\begin{cases}\nq^{2^{g}}+1, &amp; \\text{if }a_{g}=1;\\\\\n1 &amp; \\text{if }a_{g}=0\n\\end{cases}\n\\\\\n&amp;=\\left( \\prod\\limits_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}\n=1}}\\left( q^{2^{g}}+1\\right) \\right) \\underbrace{\\left( 
\\prod\\limits\n_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}=0}}1\\right) }_{=1} \\\\\n&amp;=\\prod\\limits_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}=1}}\\left(\nq^{2^{g}}+1\\right) .\n\\end{align}</span>\nThus, there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} q^i = \\prod_{g \\in G} \\left(q^{2^g} + 1\\right)\n\\end{align}</span>\n(namely, <span class=\"math-container\">$G$</span> is the set of all <span class=\"math-container\">$g\\in\\left\\{ 0,1,\\ldots,k\\right\\}$</span> satisfying <span class=\"math-container\">$a_g = 1$</span>). This proves Theorem 2. <span class=\"math-container\">$\\blacksquare$</span></p>\n\n<p><em>Proof of Theorem 1.</em> Theorem 2 (applied to <span class=\"math-container\">$q = 1$</span>) shows that there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1^i = \\prod_{g \\in G} \\left(1^{2^g} + 1\\right) .\n\\end{align}</span>\nConsider this <span class=\"math-container\">$G$</span>.\nComparing\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1^i = \\prod_{g \\in G} \\underbrace{\\left(1^{2^g} + 1\\right)}_{= 1+1 = 2}\n= \\prod_{g \\in G} 2 = 2^{\\left|G\\right|}\n\\end{align}</span>\nwith\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} \\underbrace{1^i}_{=1}\n&amp; = 
\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1 \\\\\n&amp; = \\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) \\cdot 1 \\\\\n&amp; = \\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) ,\n\\end{align}</span>\nwe obtain\n<span class=\"math-container\">\\begin{align}\n\\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) = 2^{\\left|G\\right|} .\n\\end{align}</span>\nHence, the number of <span class=\"math-container\">$i \\in \\left\\{0,1,\\ldots,n\\right\\}$</span> such that <span class=\"math-container\">$\\dbinom{n}{i}$</span> is odd is a power of <span class=\"math-container\">$2$</span>. This proves Theorem 1. <span class=\"math-container\">$\\blacksquare$</span></p>\n" }, { "answer_id": 2093632, "score": 3, "text": "<p>Similar idea to Andreas Caranti's answer:</p>\n\n<p>Kummer's Theorem says that the number of factors of <span class=\"math-container\">$p$</span> in <span class=\"math-container\">$\\binom{n}{k}$</span> is the number of carries when adding <span class=\"math-container\">$k$</span> and <span class=\"math-container\">$n-k$</span> in base-<span class=\"math-container\">$p$</span>. There are no carries when adding <span class=\"math-container\">$k$</span> and <span class=\"math-container\">$n-k$</span> in binary if and only if <span class=\"math-container\">$k\\veebar n-k=0$</span>; when that is true, <span class=\"math-container\">$n=k\\lor n-k$</span>. This means that <span class=\"math-container\">$\\binom{n}{k}$</span> is odd when the <span class=\"math-container\">$1$</span> bits of <span class=\"math-container\">$k$</span> are a subset of the <span class=\"math-container\">$1$</span> bits of <span class=\"math-container\">$n$</span>. 
That is, there are <span class=\"math-container\">$2^{\\sigma_2(n)}$</span> odd entries in row <span class=\"math-container\">$n$</span> of Pascal's Triangle, where <span class=\"math-container\">$\\sigma_2(n)$</span> is the sum of the bits in the binary representation of <span class=\"math-container\">$n$</span>.</p>\n" }, { "answer_id": 2093331, "score": 1, "text": "<p>Will delete because this is not what OP intends, but a proof via <a href=\"https://en.wikipedia.org/wiki/Kummer&#39;s_theorem\" rel=\"nofollow noreferrer\">Kummer's Theorem</a> looks rather neat to me.</p>\n\n<p>Write $n, m$ in base $2$\n$$\nn = 2^{k} n_{k} + \\dots + 2 n_{1} + n_{0},\n\\qquad\nm = 2^{k} m_{k} + \\dots + 2 m_{1} + m_{0},\n$$\nwith $n_{i}, m_{i} \\in \\{ 0, 1 \\}$.</p>\n\n<p>Then </p>\n\n<blockquote>\n <p>$\\dbinom{n}{m}$ is odd iff there are no carries in the sum in base $2$ \n $$\nm + (n - m) = n,\n$$\n that is, iff $m_{i} \\le n_{i}$ for each $i$. </p>\n</blockquote>\n\n<p>Given $n$, there are thus\n$$\n\\prod_{i=0}^{k} (n_{i} + 1)\n$$\npossibilities for $m$, and each $n_{i} + 1 \\in \\{ 1, 2 \\}$.</p>\n\n<p>This also tells you what is the exponent $k$ in the question - it is the number of $1$'s in the dyadic expansion of $n$, or equivalently, the number of <em>distinct</em> powers of $2$ that sum up to $n$.</p>\n" }, { "answer_id": 3305880, "score": 1, "text": "<p>The simplest (at least simplest to explain) solution (using Lucas's theorem) has not been posted; let me fill that \"much needed gap\". 
Lazy as I am, I will mostly copypaste my answer from <a href=\"https://math.stackexchange.com/a/2184460/\">https://math.stackexchange.com/a/2184460/</a> , since the argument is essentially the same.</p>\n\n<p>I use the notation <span class=\"math-container\">$\\mathbb{N}$</span> for the set <span class=\"math-container\">$\\left\\{0,1,2,\\ldots\\right\\}$</span>.</p>\n\n<p>The question asks the following:</p>\n\n<blockquote>\n <p><strong>Theorem 1.</strong> Let <span class=\"math-container\">$n \\in \\mathbb{N}$</span>. Then, the number of <span class=\"math-container\">$i \\in \\left\\{0,1,\\ldots,n\\right\\}$</span> such that <span class=\"math-container\">$\\dbinom{n}{i}$</span> is odd is a power of <span class=\"math-container\">$2$</span>.</p>\n</blockquote>\n\n<p>I shall prove something slightly more general:</p>\n\n<blockquote>\n <p><strong>Theorem 2.</strong> Let <span class=\"math-container\">$n \\in \\mathbb{N}$</span>. Let <span class=\"math-container\">$q$</span> be a real number. 
Then, there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n <span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} q^i = \\prod_{g \\in G} \\left(q^{2^g} + 1\\right) .\n\\end{align}</span></p>\n</blockquote>\n\n<p><em>Proof of Theorem 2.</em>\nWrite <span class=\"math-container\">$n$</span> in the form <span class=\"math-container\">$n=a_{k}2^{k}+a_{k-1}2^{k-1}+\\cdots+a_{0}2^{0}$</span> for some\n<span class=\"math-container\">$k\\in\\mathbb{N}$</span> and <span class=\"math-container\">$a_{0},a_{1},\\ldots,a_{k}\\in\\left\\{ 0,1\\right\\} $</span>.\n(This is just the base-<span class=\"math-container\">$2$</span> representation of <span class=\"math-container\">$n$</span>, possibly with leading zeroes.)</p>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\" rel=\"nofollow noreferrer\">Lucas's theorem</a> tells you\nthat if <span class=\"math-container\">$i=b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}$</span> for some <span class=\"math-container\">$b_{0}\n,b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>, then\n<span class=\"math-container\">\\begin{align}\n\\dbinom{n}{i}\n&amp;\\equiv\n\\dbinom{a_{k}}{b_{k}}\\dbinom{a_{k-1}}{b_{k-1}}\n\\cdots\\dbinom{a_{0}}{b_{0}}=\\prod\\limits_{j=0}^{k}\\underbrace{\\dbinom{a_{j}}{b_{j}}\n}_{\\substack{=\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\\\\n0, &amp; \\text{if }b_{j}&gt;a_{j}\n\\end{cases}\n\\\\\\text{(since }a_{j}\\text{ and }b_{j}\\text{ lie in }\\left\\{ 0,1\\right\\}\n\\text{)}}} \\\\\n&amp;=\\prod\\limits_{j=0}^{k}\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\\\\n0, &amp; \\text{if }b_{j}&gt;a_{j}\n\\end{cases}\n=\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\text{ for all }j\\text{;}\\\\\n0, &amp; \\text{otherwise}\n\\end{cases}\n\\mod 2 .\n\\end{align}</span>\nHence, the 
<span class=\"math-container\">$i\\in\\mathbb{N}$</span> for which <span class=\"math-container\">$\\dbinom{n}{i}$</span> is\nodd are precisely the numbers of the form <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}\n+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$b_{0},b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>\nsatisfying <span class=\"math-container\">$\\left( b_{j}\\leq a_{j}\\text{ for all }j\\right) $</span>.\nSince all these <span class=\"math-container\">$i$</span> satisfy <span class=\"math-container\">$i \\in \\left\\{ 0,1,\\ldots,n\\right\\}$</span>\n(because otherwise, <span class=\"math-container\">$\\dbinom{n}{i}$</span> would be <span class=\"math-container\">$0$</span> and therefore could not\nbe odd), we can rewrite this as follows: The\n<span class=\"math-container\">$i \\in \\left\\{ 0,1,\\ldots,n\\right\\}$</span> for which <span class=\"math-container\">$\\dbinom{n}{i}$</span> is\nodd are precisely the numbers of the form <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}\n+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$b_{0},b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>\nsatisfying <span class=\"math-container\">$\\left( b_{j}\\leq a_{j}\\text{ for all }j\\right) $</span>.\nSince these\nnumbers are distinct (because the base-<span class=\"math-container\">$2$</span> representation of any\n<span class=\"math-container\">$i\\in\\mathbb{N}$</span> is unique, as long as we fix the number of digits), we thus\ncan substitute <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$i$</span> in the\nsum <span class=\"math-container\">$\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\} ;\\\\\n\\dbinom{n}{i}\\text{ is odd}}}q^{i}$</span>. 
Thus, this sum rewrites as follows:\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\} ;\\\\\n\\dbinom{n}{i}\\text{ is odd}}}q^{i}\n&amp;=\\underbrace{\\sum\\limits_{\\substack{b_{0},b_{1}\n,\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} ;\\\\b_{j}\\leq a_{j}\\text{ for all }j}\n}}_{=\\sum\\limits_{b_{0}=0}^{a_{0}}\\sum\\limits_{b_{1}=0}^{a_{1}}\\cdots\\sum\\limits_{b_{k}=0}^{a_k}\n}\\underbrace{q^{b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}}}_{=\\left(\nq^{2^{k}}\\right) ^{b_{k}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}}\\cdots\\left(\nq^{2^{0}}\\right) ^{b_{0}}} \\\\\n&amp;=\\sum\\limits_{b_{0}=0}^{a_{0}}\\sum\\limits_{b_{1}=0}^{a_{1}}\\cdots\\sum\\limits_{b_{k}=0}^{a_{k}\n}\\left( q^{2^{k}}\\right) ^{b_{k}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}\n}\\cdots\\left( q^{2^{0}}\\right) ^{b_{0}} \\\\\n&amp;=\\left( \\sum\\limits_{b_{k}=0}^{a_{k}}\\left( q^{2^{k}}\\right) ^{b_{k}}\\right)\n\\left( \\sum\\limits_{b_{k-1}=0}^{a_{k-1}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}\n}\\right) \\cdots\\left( \\sum\\limits_{b_{0}=0}^{a_{0}}\\left( q^{2^{0}}\\right)\n^{b_{0}}\\right) \\\\\n&amp;=\\left( \\sum\\limits_{b=0}^{a_{k}}\\left( q^{2^{k}}\\right) ^{b}\\right) \\left(\n\\sum\\limits_{b=0}^{a_{k-1}}\\left( q^{2^{k-1}}\\right) ^{b}\\right) \\cdots\\left(\n\\sum\\limits_{b=0}^{a_{0}}\\left( q^{2^{0}}\\right) ^{b}\\right) \\\\\n&amp;=\\prod\\limits_{g=0}^{k}\\underbrace{\\left( \\sum\\limits_{b=0}^{a_{g}}\\left( q^{2^{g}}\\right) ^{b}\\right) }_{\\substack{=\n\\begin{cases}\nq^{2^{g}}+1, &amp; \\text{if }a_{g}=1;\\\\\n1 &amp; \\text{if }a_{g}=0\n\\end{cases}\n\\\\\\text{(since }a_{g}\\in\\left\\{ 0,1\\right\\} \\text{)}}} \\\\\n&amp;=\\prod\\limits_{g=0}^{k}\n\\begin{cases}\nq^{2^{g}}+1, &amp; \\text{if }a_{g}=1;\\\\\n1 &amp; \\text{if }a_{g}=0\n\\end{cases}\n\\\\\n&amp;=\\left( \\prod\\limits_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}\n=1}}\\left( q^{2^{g}}+1\\right) \\right) \\underbrace{\\left( 
That is, there are <span class=\"math-container\">$2^{\\sigma_2(n)}$</span> odd entries in row <span class=\"math-container\">$n$</span> of Pascal's Triangle, where <span class=\"math-container\">$\\sigma_2(n)$</span> is the sum of the bits in the binary representation of <span class=\"math-container\">$n$</span>.</p>\n" }, { "answer_id": 2093331, "score": 1, "text": "<p>Will delete because this is not what OP intends, but a proof via <a href=\"https://en.wikipedia.org/wiki/Kummer&#39;s_theorem\" rel=\"nofollow noreferrer\">Kummer's Theorem</a> looks rather neat to me.</p>\n\n<p>Write $n, m$ in base $2$\n$$\nn = 2^{k} n_{k} + \\dots + 2 n_{1} + n_{0},\n\\qquad\nm = 2^{k} m_{k} + \\dots + 2 m_{1} + m_{0},\n$$\nwith $n_{i}, m_{i} \\in \\{ 0, 1 \\}$.</p>\n\n<p>Then </p>\n\n<blockquote>\n <p>$\\dbinom{n}{m}$ is odd iff there are no carries in the sum in base $2$ \n $$\nm + (n - m) = n,\n$$\n that is, iff $m_{i} \\le n_{i}$ for each $i$. </p>\n</blockquote>\n\n<p>Given $n$, there are thus\n$$\n\\prod_{i=0}^{k} (n_{i} + 1)\n$$\npossibilities for $m$, and each $n_{i} + 1 \\in \\{ 1, 2 \\}$.</p>\n\n<p>This also tells you what is the exponent $k$ in the question - it is the number of $1$'s in the diadic expansion of $n$, or equivalently, the number of <em>distinct</em> powers of $2$ that sum up to $n$.</p>\n" }, { "answer_id": 3305880, "score": 1, "text": "<p>The simplest (at least simplest to explain) solution (using Lucas's theorem) has not been posted; let me fill that \"much needed gap\". 
Lazy as I am, I will mostly copypaste my answer from <a href=\"https://math.stackexchange.com/a/2184460/\">https://math.stackexchange.com/a/2184460/</a> , since the argument is essentially the same.</p>\n\n<p>I use the notation <span class=\"math-container\">$\\mathbb{N}$</span> for the set <span class=\"math-container\">$\\left\\{0,1,2,\\ldots\\right\\}$</span>.</p>\n\n<p>The question asks the following:</p>\n\n<blockquote>\n <p><strong>Theorem 1.</strong> Let <span class=\"math-container\">$n \\in \\mathbb{N}$</span>. Then, the number of <span class=\"math-container\">$i \\in \\left\\{0,1,\\ldots,n\\right\\}$</span> such that <span class=\"math-container\">$\\dbinom{n}{i}$</span> is odd is a power of <span class=\"math-container\">$2$</span>.</p>\n</blockquote>\n\n<p>I shall prove something slightly more general:</p>\n\n<blockquote>\n <p><strong>Theorem 2.</strong> Let <span class=\"math-container\">$n \\in \\mathbb{N}$</span>. Let <span class=\"math-container\">$q$</span> be a real number. 
Then, there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n <span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} q^i = \\prod_{g \\in G} \\left(q^{2^g} + 1\\right) .\n\\end{align}</span></p>\n</blockquote>\n\n<p><em>Proof of Theorem 2.</em>\nWrite <span class=\"math-container\">$n$</span> in the form <span class=\"math-container\">$n=a_{k}2^{k}+a_{k-1}2^{k-1}+\\cdots+a_{0}2^{0}$</span> for some\n<span class=\"math-container\">$k\\in\\mathbb{N}$</span> and <span class=\"math-container\">$a_{0},a_{1},\\ldots,a_{k}\\in\\left\\{ 0,1\\right\\} $</span>.\n(This is just the base-<span class=\"math-container\">$2$</span> representation of <span class=\"math-container\">$n$</span>, possibly with leading zeroes.)</p>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\" rel=\"nofollow noreferrer\">Lucas's theorem</a> tells you\nthat if <span class=\"math-container\">$i=b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}$</span> for some <span class=\"math-container\">$b_{0}\n,b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>, then\n<span class=\"math-container\">\\begin{align}\n\\dbinom{n}{i}\n&amp;\\equiv\n\\dbinom{a_{k}}{b_{k}}\\dbinom{a_{k-1}}{b_{k-1}}\n\\cdots\\dbinom{a_{0}}{b_{0}}=\\prod\\limits_{j=0}^{k}\\underbrace{\\dbinom{a_{j}}{b_{j}}\n}_{\\substack{=\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\\\\n0, &amp; \\text{if }b_{j}&gt;a_{j}\n\\end{cases}\n\\\\\\text{(since }a_{j}\\text{ and }b_{j}\\text{ lie in }\\left\\{ 0,1\\right\\}\n\\text{)}}} \\\\\n&amp;=\\prod\\limits_{j=0}^{k}\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\\\\n0, &amp; \\text{if }b_{j}&gt;a_{j}\n\\end{cases}\n=\n\\begin{cases}\n1, &amp; \\text{if }b_{j}\\leq a_{j}\\text{ for all }j\\text{;}\\\\\n0, &amp; \\text{otherwise}\n\\end{cases}\n\\mod 2 .\n\\end{align}</span>\nHence, the 
<span class=\"math-container\">$i\\in\\mathbb{N}$</span> for which <span class=\"math-container\">$\\dbinom{n}{i}$</span> is\nodd are precisely the numbers of the form <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}\n+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$b_{0},b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>\nsatisfying <span class=\"math-container\">$\\left( b_{j}\\leq a_{j}\\text{ for all }j\\right) $</span>.\nSince all these <span class=\"math-container\">$i$</span> satisfy <span class=\"math-container\">$i \\in \\left\\{ 0,1,\\ldots,n\\right\\}$</span>\n(because otherwise, <span class=\"math-container\">$\\dbinom{n}{i}$</span> would be <span class=\"math-container\">$0$</span> and therefore could not\nbe odd), we can rewrite this as follows: The\n<span class=\"math-container\">$i \\in \\left\\{ 0,1,\\ldots,n\\right\\}$</span> for which <span class=\"math-container\">$\\dbinom{n}{i}$</span> is\nodd are precisely the numbers of the form <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}\n+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$b_{0},b_{1},\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} $</span>\nsatisfying <span class=\"math-container\">$\\left( b_{j}\\leq a_{j}\\text{ for all }j\\right) $</span>.\nSince these\nnumbers are distinct (because the base-<span class=\"math-container\">$2$</span> representation of any\n<span class=\"math-container\">$i\\in\\mathbb{N}$</span> is unique, as long as we fix the number of digits), we thus\ncan substitute <span class=\"math-container\">$b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}$</span> for <span class=\"math-container\">$i$</span> in the\nsum <span class=\"math-container\">$\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\} ;\\\\\n\\dbinom{n}{i}\\text{ is odd}}}q^{i}$</span>. 
Thus, this sum rewrites as follows:\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\} ;\\\\\n\\dbinom{n}{i}\\text{ is odd}}}q^{i}\n&amp;=\\underbrace{\\sum\\limits_{\\substack{b_{0},b_{1}\n,\\ldots,b_{k}\\in\\left\\{ 0,1\\right\\} ;\\\\b_{j}\\leq a_{j}\\text{ for all }j}\n}}_{=\\sum\\limits_{b_{0}=0}^{a_{0}}\\sum\\limits_{b_{1}=0}^{a_{1}}\\cdots\\sum\\limits_{b_{k}=0}^{a_k}\n}\\underbrace{q^{b_{k}2^{k}+b_{k-1}2^{k-1}+\\cdots+b_{0}2^{0}}}_{=\\left(\nq^{2^{k}}\\right) ^{b_{k}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}}\\cdots\\left(\nq^{2^{0}}\\right) ^{b_{0}}} \\\\\n&amp;=\\sum\\limits_{b_{0}=0}^{a_{0}}\\sum\\limits_{b_{1}=0}^{a_{1}}\\cdots\\sum\\limits_{b_{k}=0}^{a_{k}\n}\\left( q^{2^{k}}\\right) ^{b_{k}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}\n}\\cdots\\left( q^{2^{0}}\\right) ^{b_{0}} \\\\\n&amp;=\\left( \\sum\\limits_{b_{k}=0}^{a_{k}}\\left( q^{2^{k}}\\right) ^{b_{k}}\\right)\n\\left( \\sum\\limits_{b_{k-1}=0}^{a_{k-1}}\\left( q^{2^{k-1}}\\right) ^{b_{k-1}\n}\\right) \\cdots\\left( \\sum\\limits_{b_{0}=0}^{a_{0}}\\left( q^{2^{0}}\\right)\n^{b_{0}}\\right) \\\\\n&amp;=\\left( \\sum\\limits_{b=0}^{a_{k}}\\left( q^{2^{k}}\\right) ^{b}\\right) \\left(\n\\sum\\limits_{b=0}^{a_{k-1}}\\left( q^{2^{k-1}}\\right) ^{b}\\right) \\cdots\\left(\n\\sum\\limits_{b=0}^{a_{0}}\\left( q^{2^{0}}\\right) ^{b}\\right) \\\\\n&amp;=\\prod\\limits_{g=0}^{k}\\underbrace{\\left( \\sum\\limits_{b=0}^{a_{g}}\\left( q^{2^{g}}\\right) ^{b}\\right) }_{\\substack{=\n\\begin{cases}\nq^{2^{g}}+1, &amp; \\text{if }a_{g}=1;\\\\\n1 &amp; \\text{if }a_{g}=0\n\\end{cases}\n\\\\\\text{(since }a_{g}\\in\\left\\{ 0,1\\right\\} \\text{)}}} \\\\\n&amp;=\\prod\\limits_{g=0}^{k}\n\\begin{cases}\nq^{2^{g}}+1, &amp; \\text{if }a_{g}=1;\\\\\n1 &amp; \\text{if }a_{g}=0\n\\end{cases}\n\\\\\n&amp;=\\left( \\prod\\limits_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}\n=1}}\\left( q^{2^{g}}+1\\right) \\right) \\underbrace{\\left( 
\\prod\\limits\n_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}=0}}1\\right) }_{=1} \\\\\n&amp;=\\prod\\limits_{\\substack{g\\in\\left\\{ 0,1,\\ldots,k\\right\\} ;\\\\a_{g}=1}}\\left(\nq^{2^{g}}+1\\right) .\n\\end{align}</span>\nThus, there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} q^i = \\prod_{g \\in G} \\left(q^{2^g} + 1\\right)\n\\end{align}</span>\n(namely, <span class=\"math-container\">$G$</span> is the set of all <span class=\"math-container\">$g\\in\\left\\{ 0,1,\\ldots,k\\right\\}$</span> satisfying <span class=\"math-container\">$a_g = 1$</span>). This proves Theorem 2. <span class=\"math-container\">$\\blacksquare$</span></p>\n\n<p><em>Proof of Theorem 1.</em> Theorem 2 (applied to <span class=\"math-container\">$q = 1$</span>) shows that there exists a finite subset <span class=\"math-container\">$G$</span> of <span class=\"math-container\">$\\mathbb{N}$</span> such that\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1^i = \\prod_{g \\in G} \\left(1^{2^g} + 1\\right) .\n\\end{align}</span>\nConsider this <span class=\"math-container\">$G$</span>.\nComparing\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1^i = \\prod_{g \\in G} \\underbrace{\\left(1^{2^g} + 1\\right)}_{= 1+1 = 2}\n= \\prod_{g \\in G} 2 = 2^{\\left|G\\right|}\n\\end{align}</span>\nwith\n<span class=\"math-container\">\\begin{align}\n\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} \\underbrace{1^i}_{=1}\n&amp; = 
\\sum\\limits\\limits_{\\substack{i\\in\\left\\{ 0,1,\\ldots,n\\right\\}\n;\\\\\\dbinom{n}{i}\\text{ is odd}}} 1 \\\\\n&amp; = \\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) \\cdot 1 \\\\\n&amp; = \\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) ,\n\\end{align}</span>\nwe obtain\n<span class=\"math-container\">\\begin{align}\n\\left(\\text{the number of all $i\\in\\left\\{ 0,1,\\ldots,n\\right\\}$ such that $\\dbinom{n}{i}$ is odd}\\right) = 2^{\\left|G\\right|} .\n\\end{align}</span>\nHence, the number of <span class=\"math-container\">$i \\in \\left\\{0,1,\\ldots,n\\right\\}$</span> such that <span class=\"math-container\">$\\dbinom{n}{i}$</span> is odd is a power of <span class=\"math-container\">$2$</span>. This proves Theorem 1. <span class=\"math-container\">$\\blacksquare$</span></p>\n" } ]
Question: Total number of odd numbers in a row of Pascal's triangle is a natural power of two

Answer 1: <p>Similar idea to Andreas Caranti's answer:</p> <p>Kummer's Theorem says that the number of factors of <span class="math-container">$p$</span> in <span class="math-container">$\binom{n}{k}$</span> is the number of carries when adding <span class="math-container">$k$</span> and <span class="math-container">$n-k$</span> in base-<span class="math-container">$p$</span>. There are no carries when adding <span class="math-container">$k$</span> and <span class="math-container">$n-k$</span> in binary if and only if <span class="math-container">$k\veebar n-k=0$</span>; when that is true, <span class="math-container">$n=k\lor n-k$</span>. This means that <span class="math-container">$\binom{n}{k}$</span> is odd exactly when the <span class="math-container">$1$</span> bits of <span class="math-container">$k$</span> are a subset of the <span class="math-container">$1$</span> bits of <span class="math-container">$n$</span>. That is, there are <span class="math-container">$2^{\sigma_2(n)}$</span> odd entries in row <span class="math-container">$n$</span> of Pascal's Triangle, where <span class="math-container">$\sigma_2(n)$</span> is the sum of the bits in the binary representation of <span class="math-container">$n$</span>.</p>

Answer 2: <p>Will delete because this is not what OP intends, but a proof via <a href="https://en.wikipedia.org/wiki/Kummer&#39;s_theorem" rel="nofollow noreferrer">Kummer's Theorem</a> looks rather neat to me.</p> <p>Write $n, m$ in base $2$ $$ n = 2^{k} n_{k} + \dots + 2 n_{1} + n_{0}, \qquad m = 2^{k} m_{k} + \dots + 2 m_{1} + m_{0}, $$ with $n_{i}, m_{i} \in \{ 0, 1 \}$.</p> <p>Then</p> <blockquote> <p>$\dbinom{n}{m}$ is odd iff there are no carries in the sum in base $2$ $$ m + (n - m) = n, $$ that is, iff $m_{i} \le n_{i}$ for each $i$.
</p> </blockquote> <p>Given $n$, there are thus $$ \prod_{i=0}^{k} (n_{i} + 1) $$ possibilities for $m$, and each $n_{i} + 1 \in \{ 1, 2 \}$.</p> <p>This also tells you what the exponent $k$ in the question is: it is the number of $1$'s in the dyadic expansion of $n$, or equivalently, the number of <em>distinct</em> powers of $2$ that sum up to $n$.</p>

Answer 3: <p>The simplest (at least simplest to explain) solution (using Lucas's theorem) has not been posted; let me fill that "much needed gap". Lazy as I am, I will mostly copypaste my answer from <a href="https://math.stackexchange.com/a/2184460/">https://math.stackexchange.com/a/2184460/</a>, since the argument is essentially the same.</p> <p>I use the notation <span class="math-container">$\mathbb{N}$</span> for the set <span class="math-container">$\left\{0,1,2,\ldots\right\}$</span>.</p> <p>The question asks the following:</p> <blockquote> <p><strong>Theorem 1.</strong> Let <span class="math-container">$n \in \mathbb{N}$</span>. Then, the number of <span class="math-container">$i \in \left\{0,1,\ldots,n\right\}$</span> such that <span class="math-container">$\dbinom{n}{i}$</span> is odd is a power of <span class="math-container">$2$</span>.</p> </blockquote> <p>I shall prove something slightly more general:</p> <blockquote> <p><strong>Theorem 2.</strong> Let <span class="math-container">$n \in \mathbb{N}$</span>. Let <span class="math-container">$q$</span> be a real number. Then, there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} q^i = \prod_{g \in G} \left(q^{2^g} + 1\right) .
\end{align}</span></p> </blockquote> <p><em>Proof of Theorem 2.</em> Write <span class="math-container">$n$</span> in the form <span class="math-container">$n=a_{k}2^{k}+a_{k-1}2^{k-1}+\cdots+a_{0}2^{0}$</span> for some <span class="math-container">$k\in\mathbb{N}$</span> and <span class="math-container">$a_{0},a_{1},\ldots,a_{k}\in\left\{ 0,1\right\} $</span>. (This is just the base-<span class="math-container">$2$</span> representation of <span class="math-container">$n$</span>, possibly with leading zeroes.)</p> <p><a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem" rel="nofollow noreferrer">Lucas's theorem</a> tells you that if <span class="math-container">$i=b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}$</span> for some <span class="math-container">$b_{0} ,b_{1},\ldots,b_{k}\in\left\{ 0,1\right\} $</span>, then <span class="math-container">\begin{align} \dbinom{n}{i} &amp;\equiv \dbinom{a_{k}}{b_{k}}\dbinom{a_{k-1}}{b_{k-1}} \cdots\dbinom{a_{0}}{b_{0}}=\prod\limits_{j=0}^{k}\underbrace{\dbinom{a_{j}}{b_{j}} }_{\substack{= \begin{cases} 1, &amp; \text{if }b_{j}\leq a_{j}\\ 0, &amp; \text{if }b_{j}&gt;a_{j} \end{cases} \\\text{(since }a_{j}\text{ and }b_{j}\text{ lie in }\left\{ 0,1\right\} \text{)}}} \\ &amp;=\prod\limits_{j=0}^{k} \begin{cases} 1, &amp; \text{if }b_{j}\leq a_{j}\\ 0, &amp; \text{if }b_{j}&gt;a_{j} \end{cases} = \begin{cases} 1, &amp; \text{if }b_{j}\leq a_{j}\text{ for all }j\text{;}\\ 0, &amp; \text{otherwise} \end{cases} \mod 2 . \end{align}</span> Hence, the <span class="math-container">$i\in\mathbb{N}$</span> for which <span class="math-container">$\dbinom{n}{i}$</span> is odd are precisely the numbers of the form <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1} +\cdots+b_{0}2^{0}$</span> for <span class="math-container">$b_{0},b_{1},\ldots,b_{k}\in\left\{ 0,1\right\} $</span> satisfying <span class="math-container">$\left( b_{j}\leq a_{j}\text{ for all }j\right) $</span>. 
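(Editorial aside, not part of the original answer.) The criterion just stated, that <span class="math-container">$\dbinom{n}{i}$</span> is odd precisely when each binary digit of <span class="math-container">$i$</span> is at most the corresponding digit of <span class="math-container">$n$</span>, amounts to the bitwise test `i & n == i`, and it is easy to confirm empirically. A minimal Python sketch (the variable names are mine, not from the answer):

```python
# Check Lucas's mod-2 criterion: C(n, i) is odd exactly when the set bits
# of i form a subset of the set bits of n, i.e. when i & n == i.
from math import comb

for n in range(64):
    odd_by_parity = {i for i in range(n + 1) if comb(n, i) % 2 == 1}
    odd_by_bits = {i for i in range(n + 1) if i & n == i}
    assert odd_by_parity == odd_by_bits
    # Theorem 1 in numbers: the count of odd entries is 2 to the popcount of n.
    assert len(odd_by_bits) == 2 ** bin(n).count("1")
print("Lucas criterion verified for rows 0..63")
```

Note that `i & n == i` forces every set bit of <span class="math-container">$i$</span> to be a set bit of <span class="math-container">$n$</span>, so such <span class="math-container">$i$</span> automatically lie in <span class="math-container">$\left\{ 0,1,\ldots,n\right\}$</span>.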
Since all these <span class="math-container">$i$</span> satisfy <span class="math-container">$i \in \left\{ 0,1,\ldots,n\right\}$</span> (because otherwise, <span class="math-container">$\dbinom{n}{i}$</span> would be <span class="math-container">$0$</span> and therefore could not be odd), we can rewrite this as follows: The <span class="math-container">$i \in \left\{ 0,1,\ldots,n\right\}$</span> for which <span class="math-container">$\dbinom{n}{i}$</span> is odd are precisely the numbers of the form <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1} +\cdots+b_{0}2^{0}$</span> for <span class="math-container">$b_{0},b_{1},\ldots,b_{k}\in\left\{ 0,1\right\} $</span> satisfying <span class="math-container">$\left( b_{j}\leq a_{j}\text{ for all }j\right) $</span>. Since these numbers are distinct (because the base-<span class="math-container">$2$</span> representation of any <span class="math-container">$i\in\mathbb{N}$</span> is unique, as long as we fix the number of digits), we thus can substitute <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}$</span> for <span class="math-container">$i$</span> in the sum <span class="math-container">$\sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\ \dbinom{n}{i}\text{ is odd}}}q^{i}$</span>. 
Thus, this sum rewrites as follows: <span class="math-container">\begin{align} \sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\ \dbinom{n}{i}\text{ is odd}}}q^{i} &amp;=\underbrace{\sum\limits_{\substack{b_{0},b_{1} ,\ldots,b_{k}\in\left\{ 0,1\right\} ;\\b_{j}\leq a_{j}\text{ for all }j} }}_{=\sum\limits_{b_{0}=0}^{a_{0}}\sum\limits_{b_{1}=0}^{a_{1}}\cdots\sum\limits_{b_{k}=0}^{a_k} }\underbrace{q^{b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}}}_{=\left( q^{2^{k}}\right) ^{b_{k}}\left( q^{2^{k-1}}\right) ^{b_{k-1}}\cdots\left( q^{2^{0}}\right) ^{b_{0}}} \\ &amp;=\sum\limits_{b_{0}=0}^{a_{0}}\sum\limits_{b_{1}=0}^{a_{1}}\cdots\sum\limits_{b_{k}=0}^{a_{k} }\left( q^{2^{k}}\right) ^{b_{k}}\left( q^{2^{k-1}}\right) ^{b_{k-1} }\cdots\left( q^{2^{0}}\right) ^{b_{0}} \\ &amp;=\left( \sum\limits_{b_{k}=0}^{a_{k}}\left( q^{2^{k}}\right) ^{b_{k}}\right) \left( \sum\limits_{b_{k-1}=0}^{a_{k-1}}\left( q^{2^{k-1}}\right) ^{b_{k-1} }\right) \cdots\left( \sum\limits_{b_{0}=0}^{a_{0}}\left( q^{2^{0}}\right) ^{b_{0}}\right) \\ &amp;=\left( \sum\limits_{b=0}^{a_{k}}\left( q^{2^{k}}\right) ^{b}\right) \left( \sum\limits_{b=0}^{a_{k-1}}\left( q^{2^{k-1}}\right) ^{b}\right) \cdots\left( \sum\limits_{b=0}^{a_{0}}\left( q^{2^{0}}\right) ^{b}\right) \\ &amp;=\prod\limits_{g=0}^{k}\underbrace{\left( \sum\limits_{b=0}^{a_{g}}\left( q^{2^{g}}\right) ^{b}\right) }_{\substack{= \begin{cases} q^{2^{g}}+1, &amp; \text{if }a_{g}=1;\\ 1 &amp; \text{if }a_{g}=0 \end{cases} \\\text{(since }a_{g}\in\left\{ 0,1\right\} \text{)}}} \\ &amp;=\prod\limits_{g=0}^{k} \begin{cases} q^{2^{g}}+1, &amp; \text{if }a_{g}=1;\\ 1 &amp; \text{if }a_{g}=0 \end{cases} \\ &amp;=\left( \prod\limits_{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g} =1}}\left( q^{2^{g}}+1\right) \right) \underbrace{\left( \prod\limits _{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g}=0}}1\right) }_{=1} \\ &amp;=\prod\limits_{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g}=1}}\left( q^{2^{g}}+1\right) . 
\end{align}</span> Thus, there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} q^i = \prod_{g \in G} \left(q^{2^g} + 1\right) \end{align}</span> (namely, <span class="math-container">$G$</span> is the set of all <span class="math-container">$g\in\left\{ 0,1,\ldots,k\right\}$</span> satisfying <span class="math-container">$a_g = 1$</span>). This proves Theorem 2. <span class="math-container">$\blacksquare$</span></p> <p><em>Proof of Theorem 1.</em> Theorem 2 (applied to <span class="math-container">$q = 1$</span>) shows that there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \left(1^{2^g} + 1\right) . \end{align}</span> Consider this <span class="math-container">$G$</span>. 
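(Editorial aside, not part of the original answer.) The identity of Theorem 2, proved above, can be spot-checked numerically before the argument continues; `theorem2_sides` is a hypothetical helper name introduced here for illustration:

```python
# Numerically spot-check Theorem 2:
#   sum of q^i over all i with C(n, i) odd  ==  product of (q^(2^g) + 1),
# where g ranges over the positions of the 1-bits of n (the set G).
from math import comb

def theorem2_sides(n, q):
    lhs = sum(q ** i for i in range(n + 1) if comb(n, i) % 2 == 1)
    rhs = 1
    for g in range(n.bit_length()):
        if (n >> g) & 1:  # g belongs to G exactly when bit g of n is set
            rhs *= q ** (2 ** g) + 1
    return lhs, rhs

for n in range(32):
    lhs, rhs = theorem2_sides(n, 2)  # exact integer arithmetic with q = 2
    assert lhs == rhs
```

Taking an integer value such as <span class="math-container">$q = 2$</span> keeps the check exact, avoiding floating-point comparison.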
Comparing <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \underbrace{\left(1^{2^g} + 1\right)}_{= 1+1 = 2} = \prod_{g \in G} 2 = 2^{\left|G\right|} \end{align}</span> with <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} \underbrace{1^i}_{=1} &amp; = \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) \cdot 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) , \end{align}</span> we obtain <span class="math-container">\begin{align} \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) = 2^{\left|G\right|} . \end{align}</span> Hence, the number of <span class="math-container">$i \in \left\{0,1,\ldots,n\right\}$</span> such that <span class="math-container">$\dbinom{n}{i}$</span> is odd is a power of <span class="math-container">$2$</span>. This proves Theorem 1. <span class="math-container">$\blacksquare$</span></p>
\end{align}</span> Thus, there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} q^i = \prod_{g \in G} \left(q^{2^g} + 1\right) \end{align}</span> (namely, <span class="math-container">$G$</span> is the set of all <span class="math-container">$g\in\left\{ 0,1,\ldots,k\right\}$</span> satisfying <span class="math-container">$a_g = 1$</span>). This proves Theorem 2. <span class="math-container">$\blacksquare$</span></p> <p><em>Proof of Theorem 1.</em> Theorem 2 (applied to <span class="math-container">$q = 1$</span>) shows that there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \left(1^{2^g} + 1\right) . \end{align}</span> Consider this <span class="math-container">$G$</span>. 
Comparing <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \underbrace{\left(1^{2^g} + 1\right)}_{= 1+1 = 2} = \prod_{g \in G} 2 = 2^{\left|G\right|} \end{align}</span> with <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} \underbrace{1^i}_{=1} &amp; = \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) \cdot 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) , \end{align}</span> we obtain <span class="math-container">\begin{align} \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) = 2^{\left|G\right|} . \end{align}</span> Hence, the number of <span class="math-container">$i \in \left\{0,1,\ldots,n\right\}$</span> such that <span class="math-container">$\dbinom{n}{i}$</span> is odd is a power of <span class="math-container">$2$</span>. This proves Theorem 1. <span class="math-container">$\blacksquare$</span></p> Answer 7: <p>Similar idea to Andreas Caranti's answer:</p> <p>Kummer's Theorem says that the number of factors of <span class="math-container">$p$</span> in <span class="math-container">$\binom{n}{k}$</span> is the number of carries when adding <span class="math-container">$k$</span> and <span class="math-container">$n-k$</span> in base-<span class="math-container">$p$</span>. There are no carries when adding <span class="math-container">$k$</span> and <span class="math-container">$n-k$</span> in binary if and only if <span class="math-container">$k\wedge(n-k)=0$</span> (bitwise AND); when that is true, <span class="math-container">$n=k\lor(n-k)$</span> (bitwise OR).
This means that <span class="math-container">$\binom{n}{k}$</span> is odd when the <span class="math-container">$1$</span> bits of <span class="math-container">$k$</span> are a subset of the <span class="math-container">$1$</span> bits of <span class="math-container">$n$</span>. That is, there are <span class="math-container">$2^{\sigma_2(n)}$</span> odd entries in row <span class="math-container">$n$</span> of Pascal's Triangle, where <span class="math-container">$\sigma_2(n)$</span> is the sum of the bits in the binary representation of <span class="math-container">$n$</span>.</p> Answer 8: <p>Will delete because this is not what OP intends, but a proof via <a href="https://en.wikipedia.org/wiki/Kummer&#39;s_theorem" rel="nofollow noreferrer">Kummer's Theorem</a> looks rather neat to me.</p> <p>Write $n, m$ in base $2$ $$ n = 2^{k} n_{k} + \dots + 2 n_{1} + n_{0}, \qquad m = 2^{k} m_{k} + \dots + 2 m_{1} + m_{0}, $$ with $n_{i}, m_{i} \in \{ 0, 1 \}$.</p> <p>Then </p> <blockquote> <p>$\dbinom{n}{m}$ is odd iff there are no carries in the sum in base $2$ $$ m + (n - m) = n, $$ that is, iff $m_{i} \le n_{i}$ for each $i$. </p> </blockquote> <p>Given $n$, there are thus $$ \prod_{i=0}^{k} (n_{i} + 1) $$ possibilities for $m$, and each $n_{i} + 1 \in \{ 1, 2 \}$.</p> <p>This also tells you what is the exponent $k$ in the question - it is the number of $1$'s in the dyadic expansion of $n$, or equivalently, the number of <em>distinct</em> powers of $2$ that sum up to $n$.</p>
Thus, this sum rewrites as follows: <span class="math-container">\begin{align} \sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\ \dbinom{n}{i}\text{ is odd}}}q^{i} &amp;=\underbrace{\sum\limits_{\substack{b_{0},b_{1} ,\ldots,b_{k}\in\left\{ 0,1\right\} ;\\b_{j}\leq a_{j}\text{ for all }j} }}_{=\sum\limits_{b_{0}=0}^{a_{0}}\sum\limits_{b_{1}=0}^{a_{1}}\cdots\sum\limits_{b_{k}=0}^{a_k} }\underbrace{q^{b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}}}_{=\left( q^{2^{k}}\right) ^{b_{k}}\left( q^{2^{k-1}}\right) ^{b_{k-1}}\cdots\left( q^{2^{0}}\right) ^{b_{0}}} \\ &amp;=\sum\limits_{b_{0}=0}^{a_{0}}\sum\limits_{b_{1}=0}^{a_{1}}\cdots\sum\limits_{b_{k}=0}^{a_{k} }\left( q^{2^{k}}\right) ^{b_{k}}\left( q^{2^{k-1}}\right) ^{b_{k-1} }\cdots\left( q^{2^{0}}\right) ^{b_{0}} \\ &amp;=\left( \sum\limits_{b_{k}=0}^{a_{k}}\left( q^{2^{k}}\right) ^{b_{k}}\right) \left( \sum\limits_{b_{k-1}=0}^{a_{k-1}}\left( q^{2^{k-1}}\right) ^{b_{k-1} }\right) \cdots\left( \sum\limits_{b_{0}=0}^{a_{0}}\left( q^{2^{0}}\right) ^{b_{0}}\right) \\ &amp;=\left( \sum\limits_{b=0}^{a_{k}}\left( q^{2^{k}}\right) ^{b}\right) \left( \sum\limits_{b=0}^{a_{k-1}}\left( q^{2^{k-1}}\right) ^{b}\right) \cdots\left( \sum\limits_{b=0}^{a_{0}}\left( q^{2^{0}}\right) ^{b}\right) \\ &amp;=\prod\limits_{g=0}^{k}\underbrace{\left( \sum\limits_{b=0}^{a_{g}}\left( q^{2^{g}}\right) ^{b}\right) }_{\substack{= \begin{cases} q^{2^{g}}+1, &amp; \text{if }a_{g}=1;\\ 1 &amp; \text{if }a_{g}=0 \end{cases} \\\text{(since }a_{g}\in\left\{ 0,1\right\} \text{)}}} \\ &amp;=\prod\limits_{g=0}^{k} \begin{cases} q^{2^{g}}+1, &amp; \text{if }a_{g}=1;\\ 1 &amp; \text{if }a_{g}=0 \end{cases} \\ &amp;=\left( \prod\limits_{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g} =1}}\left( q^{2^{g}}+1\right) \right) \underbrace{\left( \prod\limits _{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g}=0}}1\right) }_{=1} \\ &amp;=\prod\limits_{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g}=1}}\left( q^{2^{g}}+1\right) . 
\end{align}</span> Thus, there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} q^i = \prod_{g \in G} \left(q^{2^g} + 1\right) \end{align}</span> (namely, <span class="math-container">$G$</span> is the set of all <span class="math-container">$g\in\left\{ 0,1,\ldots,k\right\}$</span> satisfying <span class="math-container">$a_g = 1$</span>). This proves Theorem 2. <span class="math-container">$\blacksquare$</span></p> <p><em>Proof of Theorem 1.</em> Theorem 2 (applied to <span class="math-container">$q = 1$</span>) shows that there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \left(1^{2^g} + 1\right) . \end{align}</span> Consider this <span class="math-container">$G$</span>. 
Comparing <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \underbrace{\left(1^{2^g} + 1\right)}_{= 1+1 = 2} = \prod_{g \in G} 2 = 2^{\left|G\right|} \end{align}</span> with <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} \underbrace{1^i}_{=1} &amp; = \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) \cdot 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) , \end{align}</span> we obtain <span class="math-container">\begin{align} \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) = 2^{\left|G\right|} . \end{align}</span> Hence, the number of <span class="math-container">$i \in \left\{0,1,\ldots,n\right\}$</span> such that <span class="math-container">$\dbinom{n}{i}$</span> is odd is a power of <span class="math-container">$2$</span>. This proves Theorem 1. <span class="math-container">$\blacksquare$</span></p> Answer 16: <p>Similar idea to Andreas Caranti's answer:</p> <p>Kummer's Theorem says that the number of factors of <span class="math-container">$p$</span> in <span class="math-container">$\binom{n}{k}$</span> is the number of carries when adding <span class="math-container">$k$</span> and <span class="math-container">$n-k$</span> in base-<span class="math-container">$p$</span>. There are no carries when adding <span class="math-container">$k$</span> and <span class="math-container">$n-k$</span> in binary if and only if <span class="math-container">$k\veebar n-k=0$</span>; when that is true, <span class="math-container">$n=k\lor n-k$</span>. 
This means that <span class="math-container">$\binom{n}{k}$</span> is odd when the <span class="math-container">$1$</span> bits of <span class="math-container">$k$</span> are a subset of the <span class="math-container">$1$</span> bits of <span class="math-container">$n$</span>. That is, there are <span class="math-container">$2^{\sigma_2(n)}$</span> odd entries in row <span class="math-container">$n$</span> of Pascal's Triangle, where <span class="math-container">$\sigma_2(n)$</span> is the sum of the bits in the binary representation of <span class="math-container">$n$</span>.</p>
Since all these <span class="math-container">$i$</span> satisfy <span class="math-container">$i \in \left\{ 0,1,\ldots,n\right\}$</span> (because otherwise, <span class="math-container">$\dbinom{n}{i}$</span> would be <span class="math-container">$0$</span> and therefore could not be odd), we can rewrite this as follows: The <span class="math-container">$i \in \left\{ 0,1,\ldots,n\right\}$</span> for which <span class="math-container">$\dbinom{n}{i}$</span> is odd are precisely the numbers of the form <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1} +\cdots+b_{0}2^{0}$</span> for <span class="math-container">$b_{0},b_{1},\ldots,b_{k}\in\left\{ 0,1\right\} $</span> satisfying <span class="math-container">$\left( b_{j}\leq a_{j}\text{ for all }j\right) $</span>. Since these numbers are distinct (because the base-<span class="math-container">$2$</span> representation of any <span class="math-container">$i\in\mathbb{N}$</span> is unique, as long as we fix the number of digits), we thus can substitute <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}$</span> for <span class="math-container">$i$</span> in the sum <span class="math-container">$\sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\ \dbinom{n}{i}\text{ is odd}}}q^{i}$</span>. 
Thus, this sum rewrites as follows: <span class="math-container">\begin{align} \sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\ \dbinom{n}{i}\text{ is odd}}}q^{i} &amp;=\underbrace{\sum\limits_{\substack{b_{0},b_{1} ,\ldots,b_{k}\in\left\{ 0,1\right\} ;\\b_{j}\leq a_{j}\text{ for all }j} }}_{=\sum\limits_{b_{0}=0}^{a_{0}}\sum\limits_{b_{1}=0}^{a_{1}}\cdots\sum\limits_{b_{k}=0}^{a_k} }\underbrace{q^{b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}}}_{=\left( q^{2^{k}}\right) ^{b_{k}}\left( q^{2^{k-1}}\right) ^{b_{k-1}}\cdots\left( q^{2^{0}}\right) ^{b_{0}}} \\ &amp;=\sum\limits_{b_{0}=0}^{a_{0}}\sum\limits_{b_{1}=0}^{a_{1}}\cdots\sum\limits_{b_{k}=0}^{a_{k} }\left( q^{2^{k}}\right) ^{b_{k}}\left( q^{2^{k-1}}\right) ^{b_{k-1} }\cdots\left( q^{2^{0}}\right) ^{b_{0}} \\ &amp;=\left( \sum\limits_{b_{k}=0}^{a_{k}}\left( q^{2^{k}}\right) ^{b_{k}}\right) \left( \sum\limits_{b_{k-1}=0}^{a_{k-1}}\left( q^{2^{k-1}}\right) ^{b_{k-1} }\right) \cdots\left( \sum\limits_{b_{0}=0}^{a_{0}}\left( q^{2^{0}}\right) ^{b_{0}}\right) \\ &amp;=\left( \sum\limits_{b=0}^{a_{k}}\left( q^{2^{k}}\right) ^{b}\right) \left( \sum\limits_{b=0}^{a_{k-1}}\left( q^{2^{k-1}}\right) ^{b}\right) \cdots\left( \sum\limits_{b=0}^{a_{0}}\left( q^{2^{0}}\right) ^{b}\right) \\ &amp;=\prod\limits_{g=0}^{k}\underbrace{\left( \sum\limits_{b=0}^{a_{g}}\left( q^{2^{g}}\right) ^{b}\right) }_{\substack{= \begin{cases} q^{2^{g}}+1, &amp; \text{if }a_{g}=1;\\ 1 &amp; \text{if }a_{g}=0 \end{cases} \\\text{(since }a_{g}\in\left\{ 0,1\right\} \text{)}}} \\ &amp;=\prod\limits_{g=0}^{k} \begin{cases} q^{2^{g}}+1, &amp; \text{if }a_{g}=1;\\ 1 &amp; \text{if }a_{g}=0 \end{cases} \\ &amp;=\left( \prod\limits_{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g} =1}}\left( q^{2^{g}}+1\right) \right) \underbrace{\left( \prod\limits _{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g}=0}}1\right) }_{=1} \\ &amp;=\prod\limits_{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g}=1}}\left( q^{2^{g}}+1\right) . 
\end{align}</span> Thus, there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} q^i = \prod_{g \in G} \left(q^{2^g} + 1\right) \end{align}</span> (namely, <span class="math-container">$G$</span> is the set of all <span class="math-container">$g\in\left\{ 0,1,\ldots,k\right\}$</span> satisfying <span class="math-container">$a_g = 1$</span>). This proves Theorem 2. <span class="math-container">$\blacksquare$</span></p> <p><em>Proof of Theorem 1.</em> Theorem 2 (applied to <span class="math-container">$q = 1$</span>) shows that there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \left(1^{2^g} + 1\right) . \end{align}</span> Consider this <span class="math-container">$G$</span>. 
Comparing <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \underbrace{\left(1^{2^g} + 1\right)}_{= 1+1 = 2} = \prod_{g \in G} 2 = 2^{\left|G\right|} \end{align}</span> with <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} \underbrace{1^i}_{=1} &amp; = \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) \cdot 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) , \end{align}</span> we obtain <span class="math-container">\begin{align} \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) = 2^{\left|G\right|} . \end{align}</span> Hence, the number of <span class="math-container">$i \in \left\{0,1,\ldots,n\right\}$</span> such that <span class="math-container">$\dbinom{n}{i}$</span> is odd is a power of <span class="math-container">$2$</span>. This proves Theorem 1. <span class="math-container">$\blacksquare$</span></p> Answer 25: <p>Similar idea to Andreas Caranti's answer:</p> <p>Kummer's Theorem says that the number of factors of <span class="math-container">$p$</span> in <span class="math-container">$\binom{n}{k}$</span> is the number of carries when adding <span class="math-container">$k$</span> and <span class="math-container">$n-k$</span> in base-<span class="math-container">$p$</span>. There are no carries when adding <span class="math-container">$k$</span> and <span class="math-container">$n-k$</span> in binary if and only if <span class="math-container">$k\veebar n-k=0$</span>; when that is true, <span class="math-container">$n=k\lor n-k$</span>. 
This means that <span class="math-container">$\binom{n}{k}$</span> is odd when the <span class="math-container">$1$</span> bits of <span class="math-container">$k$</span> are a subset of the <span class="math-container">$1$</span> bits of <span class="math-container">$n$</span>. That is, there are <span class="math-container">$2^{\sigma_2(n)}$</span> odd entries in row <span class="math-container">$n$</span> of Pascal's Triangle, where <span class="math-container">$\sigma_2(n)$</span> is the sum of the bits in the binary representation of <span class="math-container">$n$</span>.</p> Answer 26: <p>Will delete because this is not what OP intends, but a proof via <a href="https://en.wikipedia.org/wiki/Kummer&#39;s_theorem" rel="nofollow noreferrer">Kummer's Theorem</a> looks rather neat to me.</p> <p>Write $n, m$ in base $2$ $$ n = 2^{k} n_{k} + \dots + 2 n_{1} + n_{0}, \qquad m = 2^{k} m_{k} + \dots + 2 m_{1} + m_{0}, $$ with $n_{i}, m_{i} \in \{ 0, 1 \}$.</p> <p>Then </p> <blockquote> <p>$\dbinom{n}{m}$ is odd iff there are no carries in the sum in base $2$ $$ m + (n - m) = n, $$ that is, iff $m_{i} \le n_{i}$ for each $i$. </p> </blockquote> <p>Given $n$, there are thus $$ \prod_{i=0}^{k} (n_{i} + 1) $$ possibilities for $m$, and each $n_{i} + 1 \in \{ 1, 2 \}$.</p> <p>This also tells you what is the exponent $k$ in the question - it is the number of $1$'s in the diadic expansion of $n$, or equivalently, the number of <em>distinct</em> powers of $2$ that sum up to $n$.</p> Answer 27: <p>The simplest (at least simplest to explain) solution (using Lucas's theorem) has not been posted; let me fill that "much needed gap". 
Lazy as I am, I will mostly copypaste my answer from <a href="https://math.stackexchange.com/a/2184460/">https://math.stackexchange.com/a/2184460/</a> , since the argument is essentially the same.</p> <p>I use the notation <span class="math-container">$\mathbb{N}$</span> for the set <span class="math-container">$\left\{0,1,2,\ldots\right\}$</span>.</p> <p>The question asks the following:</p> <blockquote> <p><strong>Theorem 1.</strong> Let <span class="math-container">$n \in \mathbb{N}$</span>. Then, the number of <span class="math-container">$i \in \left\{0,1,\ldots,n\right\}$</span> such that <span class="math-container">$\dbinom{n}{i}$</span> is odd is a power of <span class="math-container">$2$</span>.</p> </blockquote> <p>I shall prove something slightly more general:</p> <blockquote> <p><strong>Theorem 2.</strong> Let <span class="math-container">$n \in \mathbb{N}$</span>. Let <span class="math-container">$q$</span> be a real number. Then, there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} q^i = \prod_{g \in G} \left(q^{2^g} + 1\right) . \end{align}</span></p> </blockquote> <p><em>Proof of Theorem 2.</em> Write <span class="math-container">$n$</span> in the form <span class="math-container">$n=a_{k}2^{k}+a_{k-1}2^{k-1}+\cdots+a_{0}2^{0}$</span> for some <span class="math-container">$k\in\mathbb{N}$</span> and <span class="math-container">$a_{0},a_{1},\ldots,a_{k}\in\left\{ 0,1\right\} $</span>. 
(This is just the base-<span class="math-container">$2$</span> representation of <span class="math-container">$n$</span>, possibly with leading zeroes.)</p> <p><a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem" rel="nofollow noreferrer">Lucas's theorem</a> tells you that if <span class="math-container">$i=b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}$</span> for some <span class="math-container">$b_{0} ,b_{1},\ldots,b_{k}\in\left\{ 0,1\right\} $</span>, then <span class="math-container">\begin{align} \dbinom{n}{i} &amp;\equiv \dbinom{a_{k}}{b_{k}}\dbinom{a_{k-1}}{b_{k-1}} \cdots\dbinom{a_{0}}{b_{0}}=\prod\limits_{j=0}^{k}\underbrace{\dbinom{a_{j}}{b_{j}} }_{\substack{= \begin{cases} 1, &amp; \text{if }b_{j}\leq a_{j}\\ 0, &amp; \text{if }b_{j}&gt;a_{j} \end{cases} \\\text{(since }a_{j}\text{ and }b_{j}\text{ lie in }\left\{ 0,1\right\} \text{)}}} \\ &amp;=\prod\limits_{j=0}^{k} \begin{cases} 1, &amp; \text{if }b_{j}\leq a_{j}\\ 0, &amp; \text{if }b_{j}&gt;a_{j} \end{cases} = \begin{cases} 1, &amp; \text{if }b_{j}\leq a_{j}\text{ for all }j\text{;}\\ 0, &amp; \text{otherwise} \end{cases} \mod 2 . \end{align}</span> Hence, the <span class="math-container">$i\in\mathbb{N}$</span> for which <span class="math-container">$\dbinom{n}{i}$</span> is odd are precisely the numbers of the form <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1} +\cdots+b_{0}2^{0}$</span> for <span class="math-container">$b_{0},b_{1},\ldots,b_{k}\in\left\{ 0,1\right\} $</span> satisfying <span class="math-container">$\left( b_{j}\leq a_{j}\text{ for all }j\right) $</span>. 
Since all these <span class="math-container">$i$</span> satisfy <span class="math-container">$i \in \left\{ 0,1,\ldots,n\right\}$</span> (because otherwise, <span class="math-container">$\dbinom{n}{i}$</span> would be <span class="math-container">$0$</span> and therefore could not be odd), we can rewrite this as follows: The <span class="math-container">$i \in \left\{ 0,1,\ldots,n\right\}$</span> for which <span class="math-container">$\dbinom{n}{i}$</span> is odd are precisely the numbers of the form <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1} +\cdots+b_{0}2^{0}$</span> for <span class="math-container">$b_{0},b_{1},\ldots,b_{k}\in\left\{ 0,1\right\} $</span> satisfying <span class="math-container">$\left( b_{j}\leq a_{j}\text{ for all }j\right) $</span>. Since these numbers are distinct (because the base-<span class="math-container">$2$</span> representation of any <span class="math-container">$i\in\mathbb{N}$</span> is unique, as long as we fix the number of digits), we thus can substitute <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}$</span> for <span class="math-container">$i$</span> in the sum <span class="math-container">$\sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\ \dbinom{n}{i}\text{ is odd}}}q^{i}$</span>. 
Thus, this sum rewrites as follows: <span class="math-container">\begin{align} \sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\ \dbinom{n}{i}\text{ is odd}}}q^{i} &amp;=\underbrace{\sum\limits_{\substack{b_{0},b_{1} ,\ldots,b_{k}\in\left\{ 0,1\right\} ;\\b_{j}\leq a_{j}\text{ for all }j} }}_{=\sum\limits_{b_{0}=0}^{a_{0}}\sum\limits_{b_{1}=0}^{a_{1}}\cdots\sum\limits_{b_{k}=0}^{a_k} }\underbrace{q^{b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}}}_{=\left( q^{2^{k}}\right) ^{b_{k}}\left( q^{2^{k-1}}\right) ^{b_{k-1}}\cdots\left( q^{2^{0}}\right) ^{b_{0}}} \\ &amp;=\sum\limits_{b_{0}=0}^{a_{0}}\sum\limits_{b_{1}=0}^{a_{1}}\cdots\sum\limits_{b_{k}=0}^{a_{k} }\left( q^{2^{k}}\right) ^{b_{k}}\left( q^{2^{k-1}}\right) ^{b_{k-1} }\cdots\left( q^{2^{0}}\right) ^{b_{0}} \\ &amp;=\left( \sum\limits_{b_{k}=0}^{a_{k}}\left( q^{2^{k}}\right) ^{b_{k}}\right) \left( \sum\limits_{b_{k-1}=0}^{a_{k-1}}\left( q^{2^{k-1}}\right) ^{b_{k-1} }\right) \cdots\left( \sum\limits_{b_{0}=0}^{a_{0}}\left( q^{2^{0}}\right) ^{b_{0}}\right) \\ &amp;=\left( \sum\limits_{b=0}^{a_{k}}\left( q^{2^{k}}\right) ^{b}\right) \left( \sum\limits_{b=0}^{a_{k-1}}\left( q^{2^{k-1}}\right) ^{b}\right) \cdots\left( \sum\limits_{b=0}^{a_{0}}\left( q^{2^{0}}\right) ^{b}\right) \\ &amp;=\prod\limits_{g=0}^{k}\underbrace{\left( \sum\limits_{b=0}^{a_{g}}\left( q^{2^{g}}\right) ^{b}\right) }_{\substack{= \begin{cases} q^{2^{g}}+1, &amp; \text{if }a_{g}=1;\\ 1 &amp; \text{if }a_{g}=0 \end{cases} \\\text{(since }a_{g}\in\left\{ 0,1\right\} \text{)}}} \\ &amp;=\prod\limits_{g=0}^{k} \begin{cases} q^{2^{g}}+1, &amp; \text{if }a_{g}=1;\\ 1 &amp; \text{if }a_{g}=0 \end{cases} \\ &amp;=\left( \prod\limits_{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g} =1}}\left( q^{2^{g}}+1\right) \right) \underbrace{\left( \prod\limits _{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g}=0}}1\right) }_{=1} \\ &amp;=\prod\limits_{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g}=1}}\left( q^{2^{g}}+1\right) . 
\end{align}</span> Thus, there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} q^i = \prod_{g \in G} \left(q^{2^g} + 1\right) \end{align}</span> (namely, <span class="math-container">$G$</span> is the set of all <span class="math-container">$g\in\left\{ 0,1,\ldots,k\right\}$</span> satisfying <span class="math-container">$a_g = 1$</span>). This proves Theorem 2. <span class="math-container">$\blacksquare$</span></p> <p><em>Proof of Theorem 1.</em> Theorem 2 (applied to <span class="math-container">$q = 1$</span>) shows that there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \left(1^{2^g} + 1\right) . \end{align}</span> Consider this <span class="math-container">$G$</span>. 
Comparing <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \underbrace{\left(1^{2^g} + 1\right)}_{= 1+1 = 2} = \prod_{g \in G} 2 = 2^{\left|G\right|} \end{align}</span> with <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} \underbrace{1^i}_{=1} &amp; = \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) \cdot 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) , \end{align}</span> we obtain <span class="math-container">\begin{align} \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) = 2^{\left|G\right|} . \end{align}</span> Hence, the number of <span class="math-container">$i \in \left\{0,1,\ldots,n\right\}$</span> such that <span class="math-container">$\dbinom{n}{i}$</span> is odd is a power of <span class="math-container">$2$</span>. This proves Theorem 1. <span class="math-container">$\blacksquare$</span></p> Answer 28: <p>Similar idea to Andreas Caranti's answer:</p> <p>Kummer's Theorem says that the number of factors of <span class="math-container">$p$</span> in <span class="math-container">$\binom{n}{k}$</span> is the number of carries when adding <span class="math-container">$k$</span> and <span class="math-container">$n-k$</span> in base-<span class="math-container">$p$</span>. There are no carries when adding <span class="math-container">$k$</span> and <span class="math-container">$n-k$</span> in binary if and only if <span class="math-container">$k\veebar n-k=0$</span>; when that is true, <span class="math-container">$n=k\lor n-k$</span>. 
This means that <span class="math-container">$\binom{n}{k}$</span> is odd when the <span class="math-container">$1$</span> bits of <span class="math-container">$k$</span> are a subset of the <span class="math-container">$1$</span> bits of <span class="math-container">$n$</span>. That is, there are <span class="math-container">$2^{\sigma_2(n)}$</span> odd entries in row <span class="math-container">$n$</span> of Pascal's Triangle, where <span class="math-container">$\sigma_2(n)$</span> is the sum of the bits in the binary representation of <span class="math-container">$n$</span>.</p> Answer 29: <p>Will delete because this is not what OP intends, but a proof via <a href="https://en.wikipedia.org/wiki/Kummer&#39;s_theorem" rel="nofollow noreferrer">Kummer's Theorem</a> looks rather neat to me.</p> <p>Write $n, m$ in base $2$ $$ n = 2^{k} n_{k} + \dots + 2 n_{1} + n_{0}, \qquad m = 2^{k} m_{k} + \dots + 2 m_{1} + m_{0}, $$ with $n_{i}, m_{i} \in \{ 0, 1 \}$.</p> <p>Then </p> <blockquote> <p>$\dbinom{n}{m}$ is odd iff there are no carries in the sum in base $2$ $$ m + (n - m) = n, $$ that is, iff $m_{i} \le n_{i}$ for each $i$. </p> </blockquote> <p>Given $n$, there are thus $$ \prod_{i=0}^{k} (n_{i} + 1) $$ possibilities for $m$, and each $n_{i} + 1 \in \{ 1, 2 \}$.</p> <p>This also tells you what is the exponent $k$ in the question - it is the number of $1$'s in the diadic expansion of $n$, or equivalently, the number of <em>distinct</em> powers of $2$ that sum up to $n$.</p> Answer 30: <p>The simplest (at least simplest to explain) solution (using Lucas's theorem) has not been posted; let me fill that "much needed gap". 
Lazy as I am, I will mostly copypaste my answer from <a href="https://math.stackexchange.com/a/2184460/">https://math.stackexchange.com/a/2184460/</a> , since the argument is essentially the same.</p> <p>I use the notation <span class="math-container">$\mathbb{N}$</span> for the set <span class="math-container">$\left\{0,1,2,\ldots\right\}$</span>.</p> <p>The question asks the following:</p> <blockquote> <p><strong>Theorem 1.</strong> Let <span class="math-container">$n \in \mathbb{N}$</span>. Then, the number of <span class="math-container">$i \in \left\{0,1,\ldots,n\right\}$</span> such that <span class="math-container">$\dbinom{n}{i}$</span> is odd is a power of <span class="math-container">$2$</span>.</p> </blockquote> <p>I shall prove something slightly more general:</p> <blockquote> <p><strong>Theorem 2.</strong> Let <span class="math-container">$n \in \mathbb{N}$</span>. Let <span class="math-container">$q$</span> be a real number. Then, there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} q^i = \prod_{g \in G} \left(q^{2^g} + 1\right) . \end{align}</span></p> </blockquote> <p><em>Proof of Theorem 2.</em> Write <span class="math-container">$n$</span> in the form <span class="math-container">$n=a_{k}2^{k}+a_{k-1}2^{k-1}+\cdots+a_{0}2^{0}$</span> for some <span class="math-container">$k\in\mathbb{N}$</span> and <span class="math-container">$a_{0},a_{1},\ldots,a_{k}\in\left\{ 0,1\right\} $</span>. 
(This is just the base-<span class="math-container">$2$</span> representation of <span class="math-container">$n$</span>, possibly with leading zeroes.)</p> <p><a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem" rel="nofollow noreferrer">Lucas's theorem</a> tells you that if <span class="math-container">$i=b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}$</span> for some <span class="math-container">$b_{0} ,b_{1},\ldots,b_{k}\in\left\{ 0,1\right\} $</span>, then <span class="math-container">\begin{align} \dbinom{n}{i} &amp;\equiv \dbinom{a_{k}}{b_{k}}\dbinom{a_{k-1}}{b_{k-1}} \cdots\dbinom{a_{0}}{b_{0}}=\prod\limits_{j=0}^{k}\underbrace{\dbinom{a_{j}}{b_{j}} }_{\substack{= \begin{cases} 1, &amp; \text{if }b_{j}\leq a_{j}\\ 0, &amp; \text{if }b_{j}&gt;a_{j} \end{cases} \\\text{(since }a_{j}\text{ and }b_{j}\text{ lie in }\left\{ 0,1\right\} \text{)}}} \\ &amp;=\prod\limits_{j=0}^{k} \begin{cases} 1, &amp; \text{if }b_{j}\leq a_{j}\\ 0, &amp; \text{if }b_{j}&gt;a_{j} \end{cases} = \begin{cases} 1, &amp; \text{if }b_{j}\leq a_{j}\text{ for all }j\text{;}\\ 0, &amp; \text{otherwise} \end{cases} \mod 2 . \end{align}</span> Hence, the <span class="math-container">$i\in\mathbb{N}$</span> for which <span class="math-container">$\dbinom{n}{i}$</span> is odd are precisely the numbers of the form <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1} +\cdots+b_{0}2^{0}$</span> for <span class="math-container">$b_{0},b_{1},\ldots,b_{k}\in\left\{ 0,1\right\} $</span> satisfying <span class="math-container">$\left( b_{j}\leq a_{j}\text{ for all }j\right) $</span>. 
Since all these <span class="math-container">$i$</span> satisfy <span class="math-container">$i \in \left\{ 0,1,\ldots,n\right\}$</span> (because otherwise, <span class="math-container">$\dbinom{n}{i}$</span> would be <span class="math-container">$0$</span> and therefore could not be odd), we can rewrite this as follows: The <span class="math-container">$i \in \left\{ 0,1,\ldots,n\right\}$</span> for which <span class="math-container">$\dbinom{n}{i}$</span> is odd are precisely the numbers of the form <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1} +\cdots+b_{0}2^{0}$</span> for <span class="math-container">$b_{0},b_{1},\ldots,b_{k}\in\left\{ 0,1\right\} $</span> satisfying <span class="math-container">$\left( b_{j}\leq a_{j}\text{ for all }j\right) $</span>. Since these numbers are distinct (because the base-<span class="math-container">$2$</span> representation of any <span class="math-container">$i\in\mathbb{N}$</span> is unique, as long as we fix the number of digits), we thus can substitute <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}$</span> for <span class="math-container">$i$</span> in the sum <span class="math-container">$\sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\ \dbinom{n}{i}\text{ is odd}}}q^{i}$</span>. 
Thus, this sum rewrites as follows: <span class="math-container">\begin{align} \sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\ \dbinom{n}{i}\text{ is odd}}}q^{i} &amp;=\underbrace{\sum\limits_{\substack{b_{0},b_{1} ,\ldots,b_{k}\in\left\{ 0,1\right\} ;\\b_{j}\leq a_{j}\text{ for all }j} }}_{=\sum\limits_{b_{0}=0}^{a_{0}}\sum\limits_{b_{1}=0}^{a_{1}}\cdots\sum\limits_{b_{k}=0}^{a_k} }\underbrace{q^{b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}}}_{=\left( q^{2^{k}}\right) ^{b_{k}}\left( q^{2^{k-1}}\right) ^{b_{k-1}}\cdots\left( q^{2^{0}}\right) ^{b_{0}}} \\ &amp;=\sum\limits_{b_{0}=0}^{a_{0}}\sum\limits_{b_{1}=0}^{a_{1}}\cdots\sum\limits_{b_{k}=0}^{a_{k} }\left( q^{2^{k}}\right) ^{b_{k}}\left( q^{2^{k-1}}\right) ^{b_{k-1} }\cdots\left( q^{2^{0}}\right) ^{b_{0}} \\ &amp;=\left( \sum\limits_{b_{k}=0}^{a_{k}}\left( q^{2^{k}}\right) ^{b_{k}}\right) \left( \sum\limits_{b_{k-1}=0}^{a_{k-1}}\left( q^{2^{k-1}}\right) ^{b_{k-1} }\right) \cdots\left( \sum\limits_{b_{0}=0}^{a_{0}}\left( q^{2^{0}}\right) ^{b_{0}}\right) \\ &amp;=\left( \sum\limits_{b=0}^{a_{k}}\left( q^{2^{k}}\right) ^{b}\right) \left( \sum\limits_{b=0}^{a_{k-1}}\left( q^{2^{k-1}}\right) ^{b}\right) \cdots\left( \sum\limits_{b=0}^{a_{0}}\left( q^{2^{0}}\right) ^{b}\right) \\ &amp;=\prod\limits_{g=0}^{k}\underbrace{\left( \sum\limits_{b=0}^{a_{g}}\left( q^{2^{g}}\right) ^{b}\right) }_{\substack{= \begin{cases} q^{2^{g}}+1, &amp; \text{if }a_{g}=1;\\ 1 &amp; \text{if }a_{g}=0 \end{cases} \\\text{(since }a_{g}\in\left\{ 0,1\right\} \text{)}}} \\ &amp;=\prod\limits_{g=0}^{k} \begin{cases} q^{2^{g}}+1, &amp; \text{if }a_{g}=1;\\ 1 &amp; \text{if }a_{g}=0 \end{cases} \\ &amp;=\left( \prod\limits_{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g} =1}}\left( q^{2^{g}}+1\right) \right) \underbrace{\left( \prod\limits _{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g}=0}}1\right) }_{=1} \\ &amp;=\prod\limits_{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g}=1}}\left( q^{2^{g}}+1\right) . 
\end{align}</span> Thus, there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} q^i = \prod_{g \in G} \left(q^{2^g} + 1\right) \end{align}</span> (namely, <span class="math-container">$G$</span> is the set of all <span class="math-container">$g\in\left\{ 0,1,\ldots,k\right\}$</span> satisfying <span class="math-container">$a_g = 1$</span>). This proves Theorem 2. <span class="math-container">$\blacksquare$</span></p> <p><em>Proof of Theorem 1.</em> Theorem 2 (applied to <span class="math-container">$q = 1$</span>) shows that there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \left(1^{2^g} + 1\right) . \end{align}</span> Consider this <span class="math-container">$G$</span>. 
Comparing <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \underbrace{\left(1^{2^g} + 1\right)}_{= 1+1 = 2} = \prod_{g \in G} 2 = 2^{\left|G\right|} \end{align}</span> with <span class="math-container">\begin{align} \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} \underbrace{1^i}_{=1} &amp; = \sum\limits\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) \cdot 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) , \end{align}</span> we obtain <span class="math-container">\begin{align} \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) = 2^{\left|G\right|} . \end{align}</span> Hence, the number of <span class="math-container">$i \in \left\{0,1,\ldots,n\right\}$</span> such that <span class="math-container">$\dbinom{n}{i}$</span> is odd is a power of <span class="math-container">$2$</span>. This proves Theorem 1. <span class="math-container">$\blacksquare$</span></p> Answer 31: <p>Similar idea to Andreas Caranti's answer:</p> <p>Kummer's Theorem says that the number of factors of <span class="math-container">$p$</span> in <span class="math-container">$\binom{n}{k}$</span> is the number of carries when adding <span class="math-container">$k$</span> and <span class="math-container">$n-k$</span> in base-<span class="math-container">$p$</span>. There are no carries when adding <span class="math-container">$k$</span> and <span class="math-container">$n-k$</span> in binary if and only if <span class="math-container">$k\veebar n-k=0$</span>; when that is true, <span class="math-container">$n=k\lor n-k$</span>. 
This means that <span class="math-container">$\binom{n}{k}$</span> is odd exactly when the <span class="math-container">$1$</span> bits of <span class="math-container">$k$</span> are a subset of the <span class="math-container">$1$</span> bits of <span class="math-container">$n$</span>. That is, there are <span class="math-container">$2^{\sigma_2(n)}$</span> odd entries in row <span class="math-container">$n$</span> of Pascal's Triangle, where <span class="math-container">$\sigma_2(n)$</span> is the sum of the bits in the binary representation of <span class="math-container">$n$</span>.</p> Answer 32: <p>Will delete because this is not what OP intends, but a proof via <a href="https://en.wikipedia.org/wiki/Kummer&#39;s_theorem" rel="nofollow noreferrer">Kummer's Theorem</a> looks rather neat to me.</p> <p>Write $n, m$ in base $2$ $$ n = 2^{k} n_{k} + \dots + 2 n_{1} + n_{0}, \qquad m = 2^{k} m_{k} + \dots + 2 m_{1} + m_{0}, $$ with $n_{i}, m_{i} \in \{ 0, 1 \}$.</p> <p>Then </p> <blockquote> <p>$\dbinom{n}{m}$ is odd iff there are no carries in the sum in base $2$ $$ m + (n - m) = n, $$ that is, iff $m_{i} \le n_{i}$ for each $i$. </p> </blockquote> <p>Given $n$, there are thus $$ \prod_{i=0}^{k} (n_{i} + 1) $$ possibilities for $m$, and each $n_{i} + 1 \in \{ 1, 2 \}$.</p> <p>This also tells you what is the exponent $k$ in the question - it is the number of $1$'s in the dyadic expansion of $n$, or equivalently, the number of <em>distinct</em> powers of $2$ that sum up to $n$.</p> Answer 33: <p>The simplest (at least simplest to explain) solution (using Lucas's theorem) has not been posted; let me fill that "much needed gap".
Lazy as I am, I will mostly copypaste my answer from <a href="https://math.stackexchange.com/a/2184460/">https://math.stackexchange.com/a/2184460/</a> , since the argument is essentially the same.</p> <p>I use the notation <span class="math-container">$\mathbb{N}$</span> for the set <span class="math-container">$\left\{0,1,2,\ldots\right\}$</span>.</p> <p>The question asks the following:</p> <blockquote> <p><strong>Theorem 1.</strong> Let <span class="math-container">$n \in \mathbb{N}$</span>. Then, the number of <span class="math-container">$i \in \left\{0,1,\ldots,n\right\}$</span> such that <span class="math-container">$\dbinom{n}{i}$</span> is odd is a power of <span class="math-container">$2$</span>.</p> </blockquote> <p>I shall prove something slightly more general:</p> <blockquote> <p><strong>Theorem 2.</strong> Let <span class="math-container">$n \in \mathbb{N}$</span>. Let <span class="math-container">$q$</span> be a real number. Then, there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} q^i = \prod_{g \in G} \left(q^{2^g} + 1\right) . \end{align}</span></p> </blockquote> <p><em>Proof of Theorem 2.</em> Write <span class="math-container">$n$</span> in the form <span class="math-container">$n=a_{k}2^{k}+a_{k-1}2^{k-1}+\cdots+a_{0}2^{0}$</span> for some <span class="math-container">$k\in\mathbb{N}$</span> and <span class="math-container">$a_{0},a_{1},\ldots,a_{k}\in\left\{ 0,1\right\} $</span>.
(This is just the base-<span class="math-container">$2$</span> representation of <span class="math-container">$n$</span>, possibly with leading zeroes.)</p> <p><a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem" rel="nofollow noreferrer">Lucas's theorem</a> tells you that if <span class="math-container">$i=b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}$</span> for some <span class="math-container">$b_{0} ,b_{1},\ldots,b_{k}\in\left\{ 0,1\right\} $</span>, then <span class="math-container">\begin{align} \dbinom{n}{i} &amp;\equiv \dbinom{a_{k}}{b_{k}}\dbinom{a_{k-1}}{b_{k-1}} \cdots\dbinom{a_{0}}{b_{0}}=\prod\limits_{j=0}^{k}\underbrace{\dbinom{a_{j}}{b_{j}} }_{\substack{= \begin{cases} 1, &amp; \text{if }b_{j}\leq a_{j}\\ 0, &amp; \text{if }b_{j}&gt;a_{j} \end{cases} \\\text{(since }a_{j}\text{ and }b_{j}\text{ lie in }\left\{ 0,1\right\} \text{)}}} \\ &amp;=\prod\limits_{j=0}^{k} \begin{cases} 1, &amp; \text{if }b_{j}\leq a_{j}\\ 0, &amp; \text{if }b_{j}&gt;a_{j} \end{cases} = \begin{cases} 1, &amp; \text{if }b_{j}\leq a_{j}\text{ for all }j\text{;}\\ 0, &amp; \text{otherwise} \end{cases} \mod 2 . \end{align}</span> Hence, the <span class="math-container">$i\in\mathbb{N}$</span> for which <span class="math-container">$\dbinom{n}{i}$</span> is odd are precisely the numbers of the form <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1} +\cdots+b_{0}2^{0}$</span> for <span class="math-container">$b_{0},b_{1},\ldots,b_{k}\in\left\{ 0,1\right\} $</span> satisfying <span class="math-container">$\left( b_{j}\leq a_{j}\text{ for all }j\right) $</span>. 
Since all these <span class="math-container">$i$</span> satisfy <span class="math-container">$i \in \left\{ 0,1,\ldots,n\right\}$</span> (because otherwise, <span class="math-container">$\dbinom{n}{i}$</span> would be <span class="math-container">$0$</span> and therefore could not be odd), we can rewrite this as follows: The <span class="math-container">$i \in \left\{ 0,1,\ldots,n\right\}$</span> for which <span class="math-container">$\dbinom{n}{i}$</span> is odd are precisely the numbers of the form <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1} +\cdots+b_{0}2^{0}$</span> for <span class="math-container">$b_{0},b_{1},\ldots,b_{k}\in\left\{ 0,1\right\} $</span> satisfying <span class="math-container">$\left( b_{j}\leq a_{j}\text{ for all }j\right) $</span>. Since these numbers are distinct (because the base-<span class="math-container">$2$</span> representation of any <span class="math-container">$i\in\mathbb{N}$</span> is unique, as long as we fix the number of digits), we thus can substitute <span class="math-container">$b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}$</span> for <span class="math-container">$i$</span> in the sum <span class="math-container">$\sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\ \dbinom{n}{i}\text{ is odd}}}q^{i}$</span>.
Thus, this sum rewrites as follows: <span class="math-container">\begin{align} \sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\ \dbinom{n}{i}\text{ is odd}}}q^{i} &amp;=\underbrace{\sum\limits_{\substack{b_{0},b_{1} ,\ldots,b_{k}\in\left\{ 0,1\right\} ;\\b_{j}\leq a_{j}\text{ for all }j} }}_{=\sum\limits_{b_{0}=0}^{a_{0}}\sum\limits_{b_{1}=0}^{a_{1}}\cdots\sum\limits_{b_{k}=0}^{a_k} }\underbrace{q^{b_{k}2^{k}+b_{k-1}2^{k-1}+\cdots+b_{0}2^{0}}}_{=\left( q^{2^{k}}\right) ^{b_{k}}\left( q^{2^{k-1}}\right) ^{b_{k-1}}\cdots\left( q^{2^{0}}\right) ^{b_{0}}} \\ &amp;=\sum\limits_{b_{0}=0}^{a_{0}}\sum\limits_{b_{1}=0}^{a_{1}}\cdots\sum\limits_{b_{k}=0}^{a_{k} }\left( q^{2^{k}}\right) ^{b_{k}}\left( q^{2^{k-1}}\right) ^{b_{k-1} }\cdots\left( q^{2^{0}}\right) ^{b_{0}} \\ &amp;=\left( \sum\limits_{b_{k}=0}^{a_{k}}\left( q^{2^{k}}\right) ^{b_{k}}\right) \left( \sum\limits_{b_{k-1}=0}^{a_{k-1}}\left( q^{2^{k-1}}\right) ^{b_{k-1} }\right) \cdots\left( \sum\limits_{b_{0}=0}^{a_{0}}\left( q^{2^{0}}\right) ^{b_{0}}\right) \\ &amp;=\left( \sum\limits_{b=0}^{a_{k}}\left( q^{2^{k}}\right) ^{b}\right) \left( \sum\limits_{b=0}^{a_{k-1}}\left( q^{2^{k-1}}\right) ^{b}\right) \cdots\left( \sum\limits_{b=0}^{a_{0}}\left( q^{2^{0}}\right) ^{b}\right) \\ &amp;=\prod\limits_{g=0}^{k}\underbrace{\left( \sum\limits_{b=0}^{a_{g}}\left( q^{2^{g}}\right) ^{b}\right) }_{\substack{= \begin{cases} q^{2^{g}}+1, &amp; \text{if }a_{g}=1;\\ 1 &amp; \text{if }a_{g}=0 \end{cases} \\\text{(since }a_{g}\in\left\{ 0,1\right\} \text{)}}} \\ &amp;=\prod\limits_{g=0}^{k} \begin{cases} q^{2^{g}}+1, &amp; \text{if }a_{g}=1;\\ 1 &amp; \text{if }a_{g}=0 \end{cases} \\ &amp;=\left( \prod\limits_{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g} =1}}\left( q^{2^{g}}+1\right) \right) \underbrace{\left( \prod\limits _{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g}=0}}1\right) }_{=1} \\ &amp;=\prod\limits_{\substack{g\in\left\{ 0,1,\ldots,k\right\} ;\\a_{g}=1}}\left( q^{2^{g}}+1\right) . 
\end{align}</span> Thus, there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} q^i = \prod_{g \in G} \left(q^{2^g} + 1\right) \end{align}</span> (namely, <span class="math-container">$G$</span> is the set of all <span class="math-container">$g\in\left\{ 0,1,\ldots,k\right\}$</span> satisfying <span class="math-container">$a_g = 1$</span>). This proves Theorem 2. <span class="math-container">$\blacksquare$</span></p> <p><em>Proof of Theorem 1.</em> Theorem 2 (applied to <span class="math-container">$q = 1$</span>) shows that there exists a finite subset <span class="math-container">$G$</span> of <span class="math-container">$\mathbb{N}$</span> such that <span class="math-container">\begin{align} \sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \left(1^{2^g} + 1\right) . \end{align}</span> Consider this <span class="math-container">$G$</span>.
Comparing <span class="math-container">\begin{align} \sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1^i = \prod_{g \in G} \underbrace{\left(1^{2^g} + 1\right)}_{= 1+1 = 2} = \prod_{g \in G} 2 = 2^{\left|G\right|} \end{align}</span> with <span class="math-container">\begin{align} \sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} \underbrace{1^i}_{=1} &amp; = \sum\limits_{\substack{i\in\left\{ 0,1,\ldots,n\right\} ;\\\dbinom{n}{i}\text{ is odd}}} 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) \cdot 1 \\ &amp; = \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) , \end{align}</span> we obtain <span class="math-container">\begin{align} \left(\text{the number of all $i\in\left\{ 0,1,\ldots,n\right\}$ such that $\dbinom{n}{i}$ is odd}\right) = 2^{\left|G\right|} . \end{align}</span> Hence, the number of <span class="math-container">$i \in \left\{0,1,\ldots,n\right\}$</span> such that <span class="math-container">$\dbinom{n}{i}$</span> is odd is a power of <span class="math-container">$2$</span>. This proves Theorem 1. <span class="math-container">$\blacksquare$</span></p>
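The answers above all lead to the same count: row $n$ of Pascal's triangle contains $2^{\sigma_2(n)}$ odd entries, where $\sigma_2(n)$ is the number of $1$ bits of $n$. As a quick empirical sanity check (this snippet is my own illustration, not part of any answer), a few lines of Python can compare a direct parity count against that formula:

```python
from math import comb

def odd_entries_in_row(n: int) -> int:
    """Directly count the i in {0, ..., n} for which C(n, i) is odd."""
    return sum(1 for i in range(n + 1) if comb(n, i) % 2 == 1)

# Lucas/Kummer predict the count is 2 ** (number of 1 bits of n),
# which in particular is always a power of 2 (Theorem 1 above).
for n in range(200):
    assert odd_entries_in_row(n) == 2 ** bin(n).count("1")
```

For instance, row $7$ is $1, 7, 21, 35, 35, 21, 7, 1$ — all eight entries odd — matching $2^3$ for the three $1$ bits of $7$.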
Points on an ellipse
2,093,569
6
https://math.stackexchange.com/questions/2093569/points-on-an-ellipse
<p>For an app, I want to equally distribute a certain number of points on the perimeter of a known ellipse, and, to draw them, I need to know, for every point, <strong>the angle of the line that connects it to the center of the ellipse</strong>.</p> <p>Here is a horrible drawing of what I must achieve: the number of points is known, the distance between adjacent points on the ellipse is constant (or at least should be) but unknown (well, it is circumference/number of points), the horizontal and vertical radii are known, and I am looking for the <strong>angles a0-an</strong>.</p> <p><a href="https://i.stack.imgur.com/VhYXm.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/VhYXm.jpg" alt="ellipse"></a></p> <p>I already know that this is not an easy problem and that it has no closed-form solution. The fact is that I don't need perfection in the distribution of the points, but I do need speed in calculating their positions.</p> <p>Is there a way or an easy formula that approximates the real solution? Some algorithm that is practical to implement?</p> <p>Thank you in advance.</p>
[ { "answer_id": 2093586, "score": 7, "text": "<p>For a small eccentricity <span class=\"math-container\">$e$</span>,</p>\n<p><span class=\"math-container\">\\begin{align*}\n e^2 &amp;= 1-\\frac{b^2}{a^2}\\\\\n \\theta(t) &amp;=\n t+\\left( \\frac{e^2}{8}+\\frac{e^4}{16}+\\frac{71e^6}{2048} \\right) \\sin 2t+\n \\left( \\frac{5e^4}{256}+\\frac{5e^6}{256} \\right) \\sin 4t+\n \\frac{29e^6}{6144} \\sin 6t+O(e^{8}) \\\\\n \\begin{bmatrix} x \\\\ y \\end{bmatrix} &amp;=\n \\begin{bmatrix} a\\cos \\theta(t) \\\\ b\\sin \\theta(t) \\end{bmatrix}\n\\end{align*}</span></p>\n<p>the arclength spacing is approximately equal for uniform spacing of <span class=\"math-container\">$t\\in [0,2\\pi]$</span>.</p>\n<p><a href=\"https://i.stack.imgur.com/zZ0VS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/zZ0VS.jpg\" alt=\"enter image description here\" /></a></p>\n<p>See also my newer post <a href=\"https://math.stackexchange.com/questions/4284133/parameterization-of-an-ellipse-by-arc-length/4284944#4284944\"><em>here</em></a>.</p>\n" }, { "answer_id": 2093575, "score": 4, "text": "<p>Here is a simple algorithm that distributes points evenly on an ellipse:</p>\n\n<ol>\n<li><p>Sample the ellipse uniformly according to angle using the polar parametrization.</p></li>\n<li><p>Replace each point with the midpoint of its neighbors.<br>\nProject the midpoint ortohogonally onto the ellipse.</p></li>\n<li><p>Repeat step 2 as needed.</p></li>\n</ol>\n\n<p><a href=\"https://i.stack.imgur.com/OjndP.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/OjndP.gif\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2093784, "score": 4, "text": "<p>Perhaps the <a href=\"https://en.wikipedia.org/wiki/Ellipse#Parallelogram_method\" rel=\"nofollow noreferrer\">parallelogram method</a> is suited to your purpose? 
Here it is in action:</p>\n\n<p><a href=\"https://i.stack.imgur.com/79jt4.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/79jt4.gif\" alt=\"parallelogram method\"></a></p>\n\n<p>(Image by <a href=\"https://en.wikipedia.org/wiki/User:Cmapm\" rel=\"nofollow noreferrer\">Wikipedia user Cmarm</a>.)</p>\n" }, { "answer_id": 2093586, "score": 7, "text": "<p>For a small eccentricity <span class=\"math-container\">$e$</span>,</p>\n<p><span class=\"math-container\">\\begin{align*}\n e^2 &amp;= 1-\\frac{b^2}{a^2}\\\\\n \\theta(t) &amp;=\n t+\\left( \\frac{e^2}{8}+\\frac{e^4}{16}+\\frac{71e^6}{2048} \\right) \\sin 2t+\n \\left( \\frac{5e^4}{256}+\\frac{5e^6}{256} \\right) \\sin 4t+\n \\frac{29e^6}{6144} \\sin 6t+O(e^{8}) \\\\\n \\begin{bmatrix} x \\\\ y \\end{bmatrix} &amp;=\n \\begin{bmatrix} a\\cos \\theta(t) \\\\ b\\sin \\theta(t) \\end{bmatrix}\n\\end{align*}</span></p>\n<p>the arclength spacing is approximately equal for uniform spacing of <span class=\"math-container\">$t\\in [0,2\\pi]$</span>.</p>\n<p><a href=\"https://i.stack.imgur.com/zZ0VS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/zZ0VS.jpg\" alt=\"enter image description here\" /></a></p>\n<p>See also my newer post <a href=\"https://math.stackexchange.com/questions/4284133/parameterization-of-an-ellipse-by-arc-length/4284944#4284944\"><em>here</em></a>.</p>\n" }, { "answer_id": 2093575, "score": 4, "text": "<p>Here is a simple algorithm that distributes points evenly on an ellipse:</p>\n\n<ol>\n<li><p>Sample the ellipse uniformly according to angle using the polar parametrization.</p></li>\n<li><p>Replace each point with the midpoint of its neighbors.<br>\nProject the midpoint ortohogonally onto the ellipse.</p></li>\n<li><p>Repeat step 2 as needed.</p></li>\n</ol>\n\n<p><a href=\"https://i.stack.imgur.com/OjndP.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/OjndP.gif\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 
2093784, "score": 4, "text": "<p>Perhaps the <a href=\"https://en.wikipedia.org/wiki/Ellipse#Parallelogram_method\" rel=\"nofollow noreferrer\">parallelogram method</a> is suited to your purpose? Here it is in action:</p>\n\n<p><a href=\"https://i.stack.imgur.com/79jt4.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/79jt4.gif\" alt=\"parallelogram method\"></a></p>\n\n<p>(Image by <a href=\"https://en.wikipedia.org/wiki/User:Cmapm\" rel=\"nofollow noreferrer\">Wikipedia user Cmarm</a>.)</p>\n" }, { "answer_id": 2093586, "score": 7, "text": "<p>For a small eccentricity <span class=\"math-container\">$e$</span>,</p>\n<p><span class=\"math-container\">\\begin{align*}\n e^2 &amp;= 1-\\frac{b^2}{a^2}\\\\\n \\theta(t) &amp;=\n t+\\left( \\frac{e^2}{8}+\\frac{e^4}{16}+\\frac{71e^6}{2048} \\right) \\sin 2t+\n \\left( \\frac{5e^4}{256}+\\frac{5e^6}{256} \\right) \\sin 4t+\n \\frac{29e^6}{6144} \\sin 6t+O(e^{8}) \\\\\n \\begin{bmatrix} x \\\\ y \\end{bmatrix} &amp;=\n \\begin{bmatrix} a\\cos \\theta(t) \\\\ b\\sin \\theta(t) \\end{bmatrix}\n\\end{align*}</span></p>\n<p>the arclength spacing is approximately equal for uniform spacing of <span class=\"math-container\">$t\\in [0,2\\pi]$</span>.</p>\n<p><a href=\"https://i.stack.imgur.com/zZ0VS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/zZ0VS.jpg\" alt=\"enter image description here\" /></a></p>\n<p>See also my newer post <a href=\"https://math.stackexchange.com/questions/4284133/parameterization-of-an-ellipse-by-arc-length/4284944#4284944\"><em>here</em></a>.</p>\n" }, { "answer_id": 2093575, "score": 4, "text": "<p>Here is a simple algorithm that distributes points evenly on an ellipse:</p>\n\n<ol>\n<li><p>Sample the ellipse uniformly according to angle using the polar parametrization.</p></li>\n<li><p>Replace each point with the midpoint of its neighbors.<br>\nProject the midpoint ortohogonally onto the ellipse.</p></li>\n<li><p>Repeat step 2 as 
needed.</p></li>\n</ol>\n\n<p><a href=\"https://i.stack.imgur.com/OjndP.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/OjndP.gif\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2093784, "score": 4, "text": "<p>Perhaps the <a href=\"https://en.wikipedia.org/wiki/Ellipse#Parallelogram_method\" rel=\"nofollow noreferrer\">parallelogram method</a> is suited to your purpose? Here it is in action:</p>\n\n<p><a href=\"https://i.stack.imgur.com/79jt4.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/79jt4.gif\" alt=\"parallelogram method\"></a></p>\n\n<p>(Image by <a href=\"https://en.wikipedia.org/wiki/User:Cmapm\" rel=\"nofollow noreferrer\">Wikipedia user Cmarm</a>.)</p>\n" }, { "answer_id": 2093586, "score": 7, "text": "<p>For a small eccentricity <span class=\"math-container\">$e$</span>,</p>\n<p><span class=\"math-container\">\\begin{align*}\n e^2 &amp;= 1-\\frac{b^2}{a^2}\\\\\n \\theta(t) &amp;=\n t+\\left( \\frac{e^2}{8}+\\frac{e^4}{16}+\\frac{71e^6}{2048} \\right) \\sin 2t+\n \\left( \\frac{5e^4}{256}+\\frac{5e^6}{256} \\right) \\sin 4t+\n \\frac{29e^6}{6144} \\sin 6t+O(e^{8}) \\\\\n \\begin{bmatrix} x \\\\ y \\end{bmatrix} &amp;=\n \\begin{bmatrix} a\\cos \\theta(t) \\\\ b\\sin \\theta(t) \\end{bmatrix}\n\\end{align*}</span></p>\n<p>the arclength spacing is approximately equal for uniform spacing of <span class=\"math-container\">$t\\in [0,2\\pi]$</span>.</p>\n<p><a href=\"https://i.stack.imgur.com/zZ0VS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/zZ0VS.jpg\" alt=\"enter image description here\" /></a></p>\n<p>See also my newer post <a href=\"https://math.stackexchange.com/questions/4284133/parameterization-of-an-ellipse-by-arc-length/4284944#4284944\"><em>here</em></a>.</p>\n" }, { "answer_id": 2093575, "score": 4, "text": "<p>Here is a simple algorithm that distributes points evenly on an ellipse:</p>\n\n<ol>\n<li><p>Sample the ellipse uniformly according to angle using 
the polar parametrization.</p></li>\n<li><p>Replace each point with the midpoint of its neighbors.<br>\nProject the midpoint ortohogonally onto the ellipse.</p></li>\n<li><p>Repeat step 2 as needed.</p></li>\n</ol>\n\n<p><a href=\"https://i.stack.imgur.com/OjndP.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/OjndP.gif\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2093784, "score": 4, "text": "<p>Perhaps the <a href=\"https://en.wikipedia.org/wiki/Ellipse#Parallelogram_method\" rel=\"nofollow noreferrer\">parallelogram method</a> is suited to your purpose? Here it is in action:</p>\n\n<p><a href=\"https://i.stack.imgur.com/79jt4.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/79jt4.gif\" alt=\"parallelogram method\"></a></p>\n\n<p>(Image by <a href=\"https://en.wikipedia.org/wiki/User:Cmapm\" rel=\"nofollow noreferrer\">Wikipedia user Cmarm</a>.)</p>\n" }, { "answer_id": 2093586, "score": 7, "text": "<p>For a small eccentricity <span class=\"math-container\">$e$</span>,</p>\n<p><span class=\"math-container\">\\begin{align*}\n e^2 &amp;= 1-\\frac{b^2}{a^2}\\\\\n \\theta(t) &amp;=\n t+\\left( \\frac{e^2}{8}+\\frac{e^4}{16}+\\frac{71e^6}{2048} \\right) \\sin 2t+\n \\left( \\frac{5e^4}{256}+\\frac{5e^6}{256} \\right) \\sin 4t+\n \\frac{29e^6}{6144} \\sin 6t+O(e^{8}) \\\\\n \\begin{bmatrix} x \\\\ y \\end{bmatrix} &amp;=\n \\begin{bmatrix} a\\cos \\theta(t) \\\\ b\\sin \\theta(t) \\end{bmatrix}\n\\end{align*}</span></p>\n<p>the arclength spacing is approximately equal for uniform spacing of <span class=\"math-container\">$t\\in [0,2\\pi]$</span>.</p>\n<p><a href=\"https://i.stack.imgur.com/zZ0VS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/zZ0VS.jpg\" alt=\"enter image description here\" /></a></p>\n<p>See also my newer post <a href=\"https://math.stackexchange.com/questions/4284133/parameterization-of-an-ellipse-by-arc-length/4284944#4284944\"><em>here</em></a>.</p>\n" }, { 
"answer_id": 2093575, "score": 4, "text": "<p>Here is a simple algorithm that distributes points evenly on an ellipse:</p>\n\n<ol>\n<li><p>Sample the ellipse uniformly according to angle using the polar parametrization.</p></li>\n<li><p>Replace each point with the midpoint of its neighbors.<br>\nProject the midpoint ortohogonally onto the ellipse.</p></li>\n<li><p>Repeat step 2 as needed.</p></li>\n</ol>\n\n<p><a href=\"https://i.stack.imgur.com/OjndP.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/OjndP.gif\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2093784, "score": 4, "text": "<p>Perhaps the <a href=\"https://en.wikipedia.org/wiki/Ellipse#Parallelogram_method\" rel=\"nofollow noreferrer\">parallelogram method</a> is suited to your purpose? Here it is in action:</p>\n\n<p><a href=\"https://i.stack.imgur.com/79jt4.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/79jt4.gif\" alt=\"parallelogram method\"></a></p>\n\n<p>(Image by <a href=\"https://en.wikipedia.org/wiki/User:Cmapm\" rel=\"nofollow noreferrer\">Wikipedia user Cmarm</a>.)</p>\n" }, { "answer_id": 2093586, "score": 7, "text": "<p>For a small eccentricity <span class=\"math-container\">$e$</span>,</p>\n<p><span class=\"math-container\">\\begin{align*}\n e^2 &amp;= 1-\\frac{b^2}{a^2}\\\\\n \\theta(t) &amp;=\n t+\\left( \\frac{e^2}{8}+\\frac{e^4}{16}+\\frac{71e^6}{2048} \\right) \\sin 2t+\n \\left( \\frac{5e^4}{256}+\\frac{5e^6}{256} \\right) \\sin 4t+\n \\frac{29e^6}{6144} \\sin 6t+O(e^{8}) \\\\\n \\begin{bmatrix} x \\\\ y \\end{bmatrix} &amp;=\n \\begin{bmatrix} a\\cos \\theta(t) \\\\ b\\sin \\theta(t) \\end{bmatrix}\n\\end{align*}</span></p>\n<p>the arclength spacing is approximately equal for uniform spacing of <span class=\"math-container\">$t\\in [0,2\\pi]$</span>.</p>\n<p><a href=\"https://i.stack.imgur.com/zZ0VS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/zZ0VS.jpg\" alt=\"enter image description here\" 
/></a></p>\n<p>See also my newer post <a href=\"https://math.stackexchange.com/questions/4284133/parameterization-of-an-ellipse-by-arc-length/4284944#4284944\"><em>here</em></a>.</p>\n" }, { "answer_id": 2093575, "score": 4, "text": "<p>Here is a simple algorithm that distributes points evenly on an ellipse:</p>\n\n<ol>\n<li><p>Sample the ellipse uniformly according to angle using the polar parametrization.</p></li>\n<li><p>Replace each point with the midpoint of its neighbors.<br>\nProject the midpoint ortohogonally onto the ellipse.</p></li>\n<li><p>Repeat step 2 as needed.</p></li>\n</ol>\n\n<p><a href=\"https://i.stack.imgur.com/OjndP.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/OjndP.gif\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2093784, "score": 4, "text": "<p>Perhaps the <a href=\"https://en.wikipedia.org/wiki/Ellipse#Parallelogram_method\" rel=\"nofollow noreferrer\">parallelogram method</a> is suited to your purpose? Here it is in action:</p>\n\n<p><a href=\"https://i.stack.imgur.com/79jt4.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/79jt4.gif\" alt=\"parallelogram method\"></a></p>\n\n<p>(Image by <a href=\"https://en.wikipedia.org/wiki/User:Cmapm\" rel=\"nofollow noreferrer\">Wikipedia user Cmarm</a>.)</p>\n" }, { "answer_id": 2093586, "score": 7, "text": "<p>For a small eccentricity <span class=\"math-container\">$e$</span>,</p>\n<p><span class=\"math-container\">\\begin{align*}\n e^2 &amp;= 1-\\frac{b^2}{a^2}\\\\\n \\theta(t) &amp;=\n t+\\left( \\frac{e^2}{8}+\\frac{e^4}{16}+\\frac{71e^6}{2048} \\right) \\sin 2t+\n \\left( \\frac{5e^4}{256}+\\frac{5e^6}{256} \\right) \\sin 4t+\n \\frac{29e^6}{6144} \\sin 6t+O(e^{8}) \\\\\n \\begin{bmatrix} x \\\\ y \\end{bmatrix} &amp;=\n \\begin{bmatrix} a\\cos \\theta(t) \\\\ b\\sin \\theta(t) \\end{bmatrix}\n\\end{align*}</span></p>\n<p>the arclength spacing is approximately equal for uniform spacing of <span 
class=\"math-container\">$t\\in [0,2\\pi]$</span>.</p>\n<p><a href=\"https://i.stack.imgur.com/zZ0VS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/zZ0VS.jpg\" alt=\"enter image description here\" /></a></p>\n<p>See also my newer post <a href=\"https://math.stackexchange.com/questions/4284133/parameterization-of-an-ellipse-by-arc-length/4284944#4284944\"><em>here</em></a>.</p>\n" }, { "answer_id": 2093575, "score": 4, "text": "<p>Here is a simple algorithm that distributes points evenly on an ellipse:</p>\n\n<ol>\n<li><p>Sample the ellipse uniformly according to angle using the polar parametrization.</p></li>\n<li><p>Replace each point with the midpoint of its neighbors.<br>\nProject the midpoint ortohogonally onto the ellipse.</p></li>\n<li><p>Repeat step 2 as needed.</p></li>\n</ol>\n\n<p><a href=\"https://i.stack.imgur.com/OjndP.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/OjndP.gif\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2093784, "score": 4, "text": "<p>Perhaps the <a href=\"https://en.wikipedia.org/wiki/Ellipse#Parallelogram_method\" rel=\"nofollow noreferrer\">parallelogram method</a> is suited to your purpose? 
Here it is in action:</p>\n\n<p><a href=\"https://i.stack.imgur.com/79jt4.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/79jt4.gif\" alt=\"parallelogram method\"></a></p>\n\n<p>(Image by <a href=\"https://en.wikipedia.org/wiki/User:Cmapm\" rel=\"nofollow noreferrer\">Wikipedia user Cmarm</a>.)</p>\n" }, { "answer_id": 2093586, "score": 7, "text": "<p>For a small eccentricity <span class=\"math-container\">$e$</span>,</p>\n<p><span class=\"math-container\">\\begin{align*}\n e^2 &amp;= 1-\\frac{b^2}{a^2}\\\\\n \\theta(t) &amp;=\n t+\\left( \\frac{e^2}{8}+\\frac{e^4}{16}+\\frac{71e^6}{2048} \\right) \\sin 2t+\n \\left( \\frac{5e^4}{256}+\\frac{5e^6}{256} \\right) \\sin 4t+\n \\frac{29e^6}{6144} \\sin 6t+O(e^{8}) \\\\\n \\begin{bmatrix} x \\\\ y \\end{bmatrix} &amp;=\n \\begin{bmatrix} a\\cos \\theta(t) \\\\ b\\sin \\theta(t) \\end{bmatrix}\n\\end{align*}</span></p>\n<p>the arclength spacing is approximately equal for uniform spacing of <span class=\"math-container\">$t\\in [0,2\\pi]$</span>.</p>\n<p><a href=\"https://i.stack.imgur.com/zZ0VS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/zZ0VS.jpg\" alt=\"enter image description here\" /></a></p>\n<p>See also my newer post <a href=\"https://math.stackexchange.com/questions/4284133/parameterization-of-an-ellipse-by-arc-length/4284944#4284944\"><em>here</em></a>.</p>\n" }, { "answer_id": 2093575, "score": 4, "text": "<p>Here is a simple algorithm that distributes points evenly on an ellipse:</p>\n\n<ol>\n<li><p>Sample the ellipse uniformly according to angle using the polar parametrization.</p></li>\n<li><p>Replace each point with the midpoint of its neighbors.<br>\nProject the midpoint ortohogonally onto the ellipse.</p></li>\n<li><p>Repeat step 2 as needed.</p></li>\n</ol>\n\n<p><a href=\"https://i.stack.imgur.com/OjndP.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/OjndP.gif\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 
2093784, "score": 4, "text": "<p>Perhaps the <a href=\"https://en.wikipedia.org/wiki/Ellipse#Parallelogram_method\" rel=\"nofollow noreferrer\">parallelogram method</a> is suited to your purpose? Here it is in action:</p>\n\n<p><a href=\"https://i.stack.imgur.com/79jt4.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/79jt4.gif\" alt=\"parallelogram method\"></a></p>\n\n<p>(Image by <a href=\"https://en.wikipedia.org/wiki/User:Cmapm\" rel=\"nofollow noreferrer\">Wikipedia user Cmarm</a>.)</p>\n" }, { "answer_id": 2093586, "score": 7, "text": "<p>For a small eccentricity <span class=\"math-container\">$e$</span>,</p>\n<p><span class=\"math-container\">\\begin{align*}\n e^2 &amp;= 1-\\frac{b^2}{a^2}\\\\\n \\theta(t) &amp;=\n t+\\left( \\frac{e^2}{8}+\\frac{e^4}{16}+\\frac{71e^6}{2048} \\right) \\sin 2t+\n \\left( \\frac{5e^4}{256}+\\frac{5e^6}{256} \\right) \\sin 4t+\n \\frac{29e^6}{6144} \\sin 6t+O(e^{8}) \\\\\n \\begin{bmatrix} x \\\\ y \\end{bmatrix} &amp;=\n \\begin{bmatrix} a\\cos \\theta(t) \\\\ b\\sin \\theta(t) \\end{bmatrix}\n\\end{align*}</span></p>\n<p>the arclength spacing is approximately equal for uniform spacing of <span class=\"math-container\">$t\\in [0,2\\pi]$</span>.</p>\n<p><a href=\"https://i.stack.imgur.com/zZ0VS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/zZ0VS.jpg\" alt=\"enter image description here\" /></a></p>\n<p>See also my newer post <a href=\"https://math.stackexchange.com/questions/4284133/parameterization-of-an-ellipse-by-arc-length/4284944#4284944\"><em>here</em></a>.</p>\n" }, { "answer_id": 2093575, "score": 4, "text": "<p>Here is a simple algorithm that distributes points evenly on an ellipse:</p>\n\n<ol>\n<li><p>Sample the ellipse uniformly according to angle using the polar parametrization.</p></li>\n<li><p>Replace each point with the midpoint of its neighbors.<br>\nProject the midpoint ortohogonally onto the ellipse.</p></li>\n<li><p>Repeat step 2 as 
needed.</p></li>\n</ol>\n\n<p><a href=\"https://i.stack.imgur.com/OjndP.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/OjndP.gif\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2093784, "score": 4, "text": "<p>Perhaps the <a href=\"https://en.wikipedia.org/wiki/Ellipse#Parallelogram_method\" rel=\"nofollow noreferrer\">parallelogram method</a> is suited to your purpose? Here it is in action:</p>\n\n<p><a href=\"https://i.stack.imgur.com/79jt4.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/79jt4.gif\" alt=\"parallelogram method\"></a></p>\n\n<p>(Image by <a href=\"https://en.wikipedia.org/wiki/User:Cmapm\" rel=\"nofollow noreferrer\">Wikipedia user Cmarm</a>.)</p>\n" }, { "answer_id": 2093586, "score": 7, "text": "<p>For a small eccentricity <span class=\"math-container\">$e$</span>,</p>\n<p><span class=\"math-container\">\\begin{align*}\n e^2 &amp;= 1-\\frac{b^2}{a^2}\\\\\n \\theta(t) &amp;=\n t+\\left( \\frac{e^2}{8}+\\frac{e^4}{16}+\\frac{71e^6}{2048} \\right) \\sin 2t+\n \\left( \\frac{5e^4}{256}+\\frac{5e^6}{256} \\right) \\sin 4t+\n \\frac{29e^6}{6144} \\sin 6t+O(e^{8}) \\\\\n \\begin{bmatrix} x \\\\ y \\end{bmatrix} &amp;=\n \\begin{bmatrix} a\\cos \\theta(t) \\\\ b\\sin \\theta(t) \\end{bmatrix}\n\\end{align*}</span></p>\n<p>the arclength spacing is approximately equal for uniform spacing of <span class=\"math-container\">$t\\in [0,2\\pi]$</span>.</p>\n<p><a href=\"https://i.stack.imgur.com/zZ0VS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/zZ0VS.jpg\" alt=\"enter image description here\" /></a></p>\n<p>See also my newer post <a href=\"https://math.stackexchange.com/questions/4284133/parameterization-of-an-ellipse-by-arc-length/4284944#4284944\"><em>here</em></a>.</p>\n" }, { "answer_id": 2093575, "score": 4, "text": "<p>Here is a simple algorithm that distributes points evenly on an ellipse:</p>\n\n<ol>\n<li><p>Sample the ellipse uniformly according to angle using 
the polar parametrization.</p></li>\n<li><p>Replace each point with the midpoint of its neighbors.<br>\nProject the midpoint ortohogonally onto the ellipse.</p></li>\n<li><p>Repeat step 2 as needed.</p></li>\n</ol>\n\n<p><a href=\"https://i.stack.imgur.com/OjndP.gif\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/OjndP.gif\" alt=\"enter image description here\"></a></p>\n" } ]
Question: Points on an ellipse Answer 1: <p>For a small eccentricity <span class="math-container">$e$</span>,</p> <p><span class="math-container">\begin{align*} e^2 &amp;= 1-\frac{b^2}{a^2}\\ \theta(t) &amp;= t+\left( \frac{e^2}{8}+\frac{e^4}{16}+\frac{71e^6}{2048} \right) \sin 2t+ \left( \frac{5e^4}{256}+\frac{5e^6}{256} \right) \sin 4t+ \frac{29e^6}{6144} \sin 6t+O(e^{8}) \\ \begin{bmatrix} x \\ y \end{bmatrix} &amp;= \begin{bmatrix} a\cos \theta(t) \\ b\sin \theta(t) \end{bmatrix} \end{align*}</span></p> <p>the arclength spacing is approximately equal for uniform spacing of <span class="math-container">$t\in [0,2\pi]$</span>.</p> <p><a href="https://i.stack.imgur.com/zZ0VS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zZ0VS.jpg" alt="enter image description here" /></a></p> <p>See also my newer post <a href="https://math.stackexchange.com/questions/4284133/parameterization-of-an-ellipse-by-arc-length/4284944#4284944"><em>here</em></a>.</p> Answer 2: <p>Here is a simple algorithm that distributes points evenly on an ellipse:</p> <ol> <li><p>Sample the ellipse uniformly according to angle using the polar parametrization.</p></li> <li><p>Replace each point with the midpoint of its neighbors.<br> Project the midpoint ortohogonally onto the ellipse.</p></li> <li><p>Repeat step 2 as needed.</p></li> </ol> <p><a href="https://i.stack.imgur.com/OjndP.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OjndP.gif" alt="enter image description here"></a></p> Answer 3: <p>Perhaps the <a href="https://en.wikipedia.org/wiki/Ellipse#Parallelogram_method" rel="nofollow noreferrer">parallelogram method</a> is suited to your purpose? 
Here it is in action:</p> <p><a href="https://i.stack.imgur.com/79jt4.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/79jt4.gif" alt="parallelogram method"></a></p> <p>(Image by <a href="https://en.wikipedia.org/wiki/User:Cmapm" rel="nofollow noreferrer">Wikipedia user Cmarm</a>.)</p>
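The series parametrization in Answer 1 above is easy to check numerically. Below is a minimal Python sketch (the semi-axes `a`, `b` are arbitrary example values, not from the answer; the expansion assumes a small eccentricity): it samples the parameter $t$ uniformly, applies the $\theta(t)$ correction, and compares the spread of consecutive chord lengths with the naive uniform-in-angle sampling.

```python
import math

# Example semi-axes (assumed values; the series needs a small eccentricity).
a, b = 1.0, 0.95
e2 = 1.0 - (b / a) ** 2          # e^2 = 1 - b^2/a^2
e4, e6 = e2 ** 2, e2 ** 3

def theta(t):
    # The O(e^8) arc-length correction series quoted above.
    return (t
            + (e2 / 8 + e4 / 16 + 71 * e6 / 2048) * math.sin(2 * t)
            + (5 * e4 / 256 + 5 * e6 / 256) * math.sin(4 * t)
            + (29 * e6 / 6144) * math.sin(6 * t))

def gap_ratio(angles):
    # Ratio of longest to shortest distance between consecutive points;
    # a value of 1.0 would mean perfectly even spacing.
    pts = [(a * math.cos(u), b * math.sin(u)) for u in angles]
    gaps = [math.dist(p, q) for p, q in zip(pts, pts[1:])]
    return max(gaps) / min(gaps)

ts = [2 * math.pi * k / 200 for k in range(201)]    # uniform in t, closed loop
naive_ratio = gap_ratio(ts)                         # uniform in angle: uneven
series_ratio = gap_ratio([theta(t) for t in ts])    # corrected: nearly even
```

For these axes the naive spacing varies by roughly the factor $a/b$, while the corrected spacing is even to well under one percent.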
Proving that the set of all increasing sequences is uncountable
2,093,673
6
https://math.stackexchange.com/questions/2093673/proving-that-the-set-of-all-increasing-sequences-is-uncountable
<p>Claim: Prove that the set of all increasing sequences $n_1&lt;n_2&lt;...&lt;n_n&lt;...$ is uncountable. </p> <p>Here's my attempt:</p> <p>Let $X=\{(x_n)_{n\in N};(x_n)_{n\in N}$ is increasing$\}$. Consider $f:N\rightarrow X$. We claim that no $f$ can be surjective.</p> <p>Denote by $(u_n)_{n\in N}$ the image of $f$ at the point $u\in N$, i.e., $f(u)=(u_n)_{n\in N}$. We construct a sequence $(s_n)_{n\in N}$ such that $(s_n)_{n\in N}\neq f(s)$ for all $s\in N$. </p> <p>To do so, construct $(s_n)_{n\in N}$ by taking $s_1=\prod_{k=1}^{\infty} f(k)_1$, $s_2=\prod_{k=1}^{\infty} f(k)_2$, ... , $s_n = \prod_{k=1}^{\infty} f(k)_n$, ...</p> <p>By the monotonicity of multiplication, we guarantee that $(s_n)_{n\in N} \in X$. But by the way we built it, $(s_n)_{n\in N} \neq f(s)$ for all natural $s$. Hence, $f$ cannot be surjective and therefore $X$ is not countable.</p> <p>Is it correct? And if so, I find it pretty intuitive that $(s_n)_{n\in N} \neq f(s)$ for all natural $s$. But can someone make it more rigorous and clear?</p> <p>EDIT:</p> <p>I came up with a new solution and better notation. Here it is:</p> <p>Let $X=\{(x_n)_{n\in N};(x_n)_{n\in N}$ is increasing$\}$. Consider $f:N\rightarrow X$. </p> <p>Denote by $(u_n)_{n\in N}$ the image of $f$ at the point $u\in N$, i.e., $f(u)=(u_n)_{n\in N}$. We may write this sequence as $f(u)= (f(u)_1,...,f(u)_n,...)$. Also define $A_u=\{f(u)_1,...,f(u)_n,...\}$ as the set of all elements of the sequence $f(u)$. We define this set for every $u \in N$.</p> <p>Now we construct a new sequence $(s_n)_{n\in N}$ inductively, given by:</p> <p>Define $f(s)_1 =$ the smallest element of $A_1$.</p> <p>Supposing $f(s)_1,...,f(s)_n$ defined, let $S_{n+1} = \{x \in A_{n+1}; x&gt;y$, where $y$ is the smallest element of $A_{n}\}$.</p> <p>Then take $f(s)_{n+1} =$ the smallest element of $S_{n+1}$.</p> <p>Therefore the sequence $(s_n)_{n\in N}$ is now defined inductively, and by construction $(s_n)_{n\in N}$ is increasing.
Thus $(s_n)_{n\in N} \in X$ and, by construction, $(s_n)_{n\in N} \neq f(s)$ for all $s \in N$. Therefore $f$ is not surjective and hence $X$ is not countable.</p>
[ { "answer_id": 2093683, "score": 6, "text": "<p>Here is another approach.\nSuppose the set is countable, hence we can make a list of its elements\n$f_1, f_2, f_3, \\ldots$. Consider the function $f$ defined by $f(1)=f_1(1)+1$ and\n$$ f(n)=f_n(n)+f(n-1)+1$$\nfor $n&gt;1$. Then $f$ is increasing, but $f$ differs from every $f_i$ of the list. Therefore, as we can't make a list of its elements, the set is uncountable.</p>\n" }, { "answer_id": 2093713, "score": 4, "text": "<p>If you already know that there are uncountably many subsets of $\\mathbb N$, then you can solve this problem by exhibiting a one-to-one map from subsets of $\\mathbb N$ to increasing sequences of natural numbers, for example the map sending any $X\\subseteq\\mathbb N$ to the sequence $(x_n)$ where $x_n=2n$ for $n\\in X$ and $x_n=2n+1$ for $n\\notin X$.</p>\n" } ]
Therefore, as we can't make a list of its elements, the set is uncountable.</p>\n" }, { "answer_id": 2093713, "score": 4, "text": "<p>If you already know that there are uncountably many subsets of $\\mathbb N$, then you can solve this problem by exhibiting a one-to-one map from subsets of $\\mathbb N$ to increasing sequences of natural numbers, for example the map sending any $X\\subseteq\\mathbb N$ to the sequence $(x_n)$ where $x_n=2n$ for $n\\in X$ and $x_n=2n+1$ for $n\\notin X$.</p>\n" } ]
Question: Proving that the set of all increasing sequences is uncountable Answer 1: <p>Here is another approach. Suppose the set is countable, hence we can make a list of its elements $f_1, f_2, f_3, \ldots$. Consider the function $f$ defined by $f(1)=f_1(1)+1$ and $$ f(n)=f_n(n)+f(n-1)+1$$ for $n&gt;1$. Then $f$ is increasing, but $f$ differs from every $f_i$ of the list. Therefore, as we can't make a list of its elements, the set is uncountable.</p> Answer 2: <p>If you already know that there are uncountably many subsets of $\mathbb N$, then you can solve this problem by exhibiting a one-to-one map from subsets of $\mathbb N$ to increasing sequences of natural numbers, for example the map sending any $X\subseteq\mathbb N$ to the sequence $(x_n)$ where $x_n=2n$ for $n\in X$ and $x_n=2n+1$ for $n\notin X$.</p>
Solving the integral $\int\sqrt{\ln(x)}\,dx $
2,093,810
6
https://math.stackexchange.com/questions/2093810/solving-the-integral-int-sqrt-lnx-dx
<p>How to solve the following integral <span class="math-container">$$ \int\sqrt{\ln(x)}\,dx $$</span> If the above integral is not solve-able, how to proof that the function <span class="math-container">$\sqrt{\ln(x)}$</span> is not integrable.</p>
[ { "answer_id": 2093867, "score": 4, "text": "<p>Let $x=e^{-u}$ and you get</p>\n\n<p>$$\\int_a^b\\sqrt{\\ln x}\\ dx=-\\int_{-\\ln a}^{-\\ln b}\\sqrt{-u}e^{-u}\\ du=i\\left(\\gamma(3/2,-\\ln a)-\\gamma(3/2,-\\ln b)\\right)$$</p>\n\n<p>where $\\gamma(a,x)$ is the <a href=\"http://mathworld.wolfram.com/IncompleteGammaFunction.html\" rel=\"nofollow noreferrer\">lower incomplete gamma function</a>.</p>\n" }, { "answer_id": 2094338, "score": 4, "text": "<p>Considering $$I=\\int\\sqrt{\\log( x)}\\ dx$$ change variable $$\\sqrt{\\log( x)}=t\\implies x=e^{t^2}\\implies dx=2te^{t^2}\\,dt$$ which makes $$I=\\int 2t^2e^{t^2}\\,dt$$ Now, integration by parts $$u=t\\implies u'=dt$$ $$v'=2te^{t^2}\\,dt\\implies v=e^{t^2}$$ $$I=te^{t^2}-\\int e^{t^2}\\,dt=te^{t^2} -\\frac{\\sqrt{\\pi }}{2} \\text{erfi}(t)$$ where appears the imaginary error function.</p>\n\n<p>Back to $x$, $$I=x \\sqrt{\\log (x)}-\\frac{\\sqrt{\\pi }}{2} \\text{erfi}\\left(\\sqrt{\\log (x)}\\right)$$ which is real if $x\\geq 1$.</p>\n" } ]
Question: Solving the integral $\int\sqrt{\ln(x)}\,dx $ Answer 1: <p>Let $x=e^{-u}$ and you get</p> <p>$$\int_a^b\sqrt{\ln x}\ dx=-\int_{-\ln a}^{-\ln b}\sqrt{-u}e^{-u}\ du=i\left(\gamma(3/2,-\ln a)-\gamma(3/2,-\ln b)\right)$$</p> <p>where $\gamma(a,x)$ is the <a href="http://mathworld.wolfram.com/IncompleteGammaFunction.html" rel="nofollow noreferrer">lower incomplete gamma function</a>.</p> Answer 2: <p>Considering $$I=\int\sqrt{\log( x)}\ dx$$ change variable $$\sqrt{\log( x)}=t\implies x=e^{t^2}\implies dx=2te^{t^2}\,dt$$ which makes $$I=\int 2t^2e^{t^2}\,dt$$ Now, integration by parts $$u=t\implies u'=dt$$ $$v'=2te^{t^2}\,dt\implies v=e^{t^2}$$ $$I=te^{t^2}-\int e^{t^2}\,dt=te^{t^2} -\frac{\sqrt{\pi }}{2} \text{erfi}(t)$$ where appears the imaginary error function.</p> <p>Back to $x$, $$I=x \sqrt{\log (x)}-\frac{\sqrt{\pi }}{2} \text{erfi}\left(\sqrt{\log (x)}\right)$$ which is real if $x\geq 1$.</p>
Limit of the exponential functions: $\lim_{x\to 0} \frac{e^x-e^{x \cos x}} {x +\sin x}$
2,094,362
6
https://math.stackexchange.com/questions/2094362/limit-of-the-exponential-functions-lim-x-to-0-fracex-ex-cos-x-x
<p>I want to find the limit of this function by simple algebraic manipulation. I have already computed the limit by L'Hospital's rule, but I still want to evaluate it purely by manipulating the expression into a form where known limits can be applied: $$\lim_{x\to 0} \frac{e^x-e^{x \cos x}} {x +\sin x} $$</p> <p>So far we have only been taught basic limits such as $\lim_{x\to 0} \frac{e^x-1}{x}=1$, and that is why I have been trying to bring out such a form in this expression.</p> <p>P.S. The answer I got by L'Hospital's rule is $0$.</p>
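As a quick numerical sketch (added here, not part of the original post): the numerator behaves like $x^3/2$ and the denominator like $2x$ near $0$, so the quotient shrinks roughly like $x^2/4$, consistent with the limit $0$:

```python
import math

def f(x):
    # The quotient from the question
    return (math.exp(x) - math.exp(x * math.cos(x))) / (x + math.sin(x))

# Series heuristics: numerator ~ x**3/2, denominator ~ 2x, so f(x) ~ x**2/4 -> 0
for x in (0.1, 0.01, 0.001):
    print(x, f(x), x * x / 4)
```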
[ { "answer_id": 2094376, "score": 7, "text": "<p>$$\\begin{align*}\n&amp; \\lim_{x \\rightarrow 0} \\frac{e^x - e^{x\\cos x}}{x + \\sin x} \\\\\n&amp; = \\lim_{x \\rightarrow 0} \\frac{e^x - 1 + 1 - e^{x\\cos x}}{x + \\sin x} \\\\\n&amp; = \\lim_{x \\rightarrow 0} \\frac{e^x-1}{x+\\sin x}\n+ \\lim_{x \\rightarrow 0} \\frac{1-e^{x\\cos x}}{x+\\sin x} \\\\\n&amp; = \\lim_{x \\rightarrow 0} \\frac{e^x-1}{x} \\cdot \\frac{x}{x+\\sin x}\n+ \\lim_{x \\rightarrow 0} \\frac{1-e^{x\\cos x}}{-x\\cos x} \\cdot \\frac{-x\\cos x}{x+\\sin x} \\\\\n&amp; = \\lim_{x \\rightarrow 0} \\frac{e^x-1}{x} \\cdot\n\\lim_{x \\rightarrow 0} \\frac{1}{1 + \\frac{\\sin x}{x}}\n+ \\lim_{x\\cos x \\rightarrow 0} \\frac{e^{x\\cos x}-1}{x\\cos x} \\cdot\n\\lim_{x \\rightarrow 0} -\\frac{x}{x+\\sin x} \\cdot \\cos x \\\\\n&amp; = 1\\cdot \\frac{1}{1+1} + 1 \\cdot -\\frac{1}{1+1} \\cdot 1\\\\\n&amp; = \\frac{1}{2} - \\frac{1}{2}\\\\\n&amp; = 0 \\\\\n\\end{align*}$$ </p>\n" }, { "answer_id": 2094387, "score": 3, "text": "<p>$$\\frac{e^x-e^{x \\cos x}} {x +\\sin x}=\\frac{\\frac{e^x}{x}-\\frac{e^{x \\cos x}}{x}}{1+\\frac{\\sin x}{x}}=\\frac{\\frac{e^x-1}{x}-\\frac{e^{x \\cos x}-1}{x}}{1+\\frac{\\sin x}{x}}=\\frac{\\frac{e^x-1}{x}-\\cos x\\frac{e^{x \\cos x}-1}{x \\cos x}}{1+\\frac{\\sin x}{x}}$$</p>\n\n<p>Hence the limit is $\\frac{1-1(1)}{1+1}=0$.</p>\n" }, { "answer_id": 2095942, "score": 2, "text": "<p>Note that $$\\frac{e^{x} - e^{x\\cos x}} {x+\\sin x} = e^{x\\cos x} \\cdot\\frac{e^{x(1-\\cos x)} - 1}{x(1-\\cos x)} \\cdot\\dfrac{1-\\cos x} {1 + \\dfrac{\\sin x} {x}} $$ which tends to $$e^{0\\cdot 1}\\cdot 1\\cdot\\frac{1-1}{1+1}=0$$ as $x\\to 0$.</p>\n" }, { "answer_id": 2094469, "score": 1, "text": "<p>$$\\lim _{x\\to 0}\\left(\\frac{e^x-e^{x\\:\\cos(x)}}{\\:x+sin(x)}\\:\\right) \\approx \\lim _{x\\to 0}\\left(\\frac{1+x+o\\left(x^2\\right)-\\left(1+x+o\\left(x^2\\right)\\right)}{\\:x+x+o\\left(x^3\\right)}\\:\\right) = \\color{red}{0}$$</p>\n" }, { "answer_id": 2094508, "score": 1, "text": "<p>Consider\n$$\n\\frac{e^x-e^{x\\cos x}}{x+\\sin x}=\\frac{e^x(1-e^{x(\\cos x-1)})}{x+\\sin x}=\n\\frac{-xe^x}{x+\\sin x}\\frac{e^{x(\\cos x-1)}-1}{x}\n$$\nThe first factor is easily seen to have limit $-1/2$. The limit of the second factor is\n$$\n\\lim_{x\\to0}\\frac{e^{x(\\cos x-1)}-1}{x}\n$$\nwhich is the derivative at $0$ of $f(x)=e^{x(\\cos x-1)}$. Any manipulation you do will just be adapting the proof of the chain rule to this particular case, so why not using it directly? Since\n$$\nf'(x)=e^{x(\\cos x-1)}(\\cos x-1-x\\sin x)\n$$\nwe have $f'(0)=0$. </p>\n\n<p>Without the derivative,\n$$\n\\lim_{x\\to0}\\frac{e^{x(\\cos x-1)}-1}{x}=\n\\lim_{x\\to0}\\frac{e^{x(\\cos x-1)}-1}{x(\\cos x-1)}(\\cos x-1)\n$$\nand the limit of the fraction is $1$ because it's essentially the same as\n$$\n\\lim_{t\\to0}\\frac{e^t-1}{t}\n$$\nbecause $\\lim_{x\\to0} x(\\cos x-1)=0$ and the function $g(x)=x(\\cos x-1)$ is invertible in a neighborhood of $0$.</p>\n" }, { "answer_id": 2094368, "score": 0, "text": "<p>Hint: series expansion of $e^x$, $e^{x \\cos x}$ and $\\sin x$.</p>\n" } ]
Question: Limit of the exponential functions: $\lim_{x\to 0} \frac{e^x-e^{x \cos x}} {x +\sin x}$ Answer 1: <p>$$\begin{align*} &amp; \lim_{x \rightarrow 0} \frac{e^x - e^{x\cos x}}{x + \sin x} \\ &amp; = \lim_{x \rightarrow 0} \frac{e^x - 1 + 1 - e^{x\cos x}}{x + \sin x} \\ &amp; = \lim_{x \rightarrow 0} \frac{e^x-1}{x+\sin x} + \lim_{x \rightarrow 0} \frac{1-e^{x\cos x}}{x+\sin x} \\ &amp; = \lim_{x \rightarrow 0} \frac{e^x-1}{x} \cdot \frac{x}{x+\sin x} + \lim_{x \rightarrow 0} \frac{1-e^{x\cos x}}{-x\cos x} \cdot \frac{-x\cos x}{x+\sin x} \\ &amp; = \lim_{x \rightarrow 0} \frac{e^x-1}{x} \cdot \lim_{x \rightarrow 0} \frac{1}{1 + \frac{\sin x}{x}} + \lim_{x\cos x \rightarrow 0} \frac{e^{x\cos x}-1}{x\cos x} \cdot \lim_{x \rightarrow 0} -\frac{x}{x+\sin x} \cdot \cos x \\ &amp; = 1\cdot \frac{1}{1+1} + 1 \cdot -\frac{1}{1+1} \cdot 1\\ &amp; = \frac{1}{2} - \frac{1}{2}\\ &amp; = 0 \\ \end{align*}$$ </p> Answer 2: <p>$$\frac{e^x-e^{x \cos x}} {x +\sin x}=\frac{\frac{e^x}{x}-\frac{e^{x \cos x}}{x}}{1+\frac{\sin x}{x}}=\frac{\frac{e^x-1}{x}-\frac{e^{x \cos x}-1}{x}}{1+\frac{\sin x}{x}}=\frac{\frac{e^x-1}{x}-\cos x\frac{e^{x \cos x}-1}{x \cos x}}{1+\frac{\sin x}{x}}$$</p> <p>Hence the limit is $\frac{1-1(1)}{1+1}=0$.</p> Answer 3: <p>Note that $$\frac{e^{x} - e^{x\cos x}} {x+\sin x} = e^{x\cos x} \cdot\frac{e^{x(1-\cos x)} - 1}{x(1-\cos x)} \cdot\dfrac{1-\cos x} {1 + \dfrac{\sin x} {x}} $$ which tends to $$e^{0\cdot 1}\cdot 1\cdot\frac{1-1}{1+1}=0$$ as $x\to 0$.</p> Answer 4: <p>$$\lim _{x\to 0}\left(\frac{e^x-e^{x\:\cos(x)}}{\:x+sin(x)}\:\right) \approx \lim _{x\to 0}\left(\frac{1+x+o\left(x^2\right)-\left(1+x+o\left(x^2\right)\right)}{\:x+x+o\left(x^3\right)}\:\right) = \color{red}{0}$$</p> Answer 5: <p>Consider $$ \frac{e^x-e^{x\cos x}}{x+\sin x}=\frac{e^x(1-e^{x(\cos x-1)})}{x+\sin x}= \frac{-xe^x}{x+\sin x}\frac{e^{x(\cos x-1)}-1}{x} $$ The first factor is easily seen to have limit $-1/2$. 
The limit of the second factor is $$ \lim_{x\to0}\frac{e^{x(\cos x-1)}-1}{x} $$ which is the derivative at $0$ of $f(x)=e^{x(\cos x-1)}$. Any manipulation you do will just be adapting the proof of the chain rule to this particular case, so why not using it directly? Since $$ f'(x)=e^{x(\cos x-1)}(\cos x-1-x\sin x) $$ we have $f'(0)=0$. </p> <p>Without the derivative, $$ \lim_{x\to0}\frac{e^{x(\cos x-1)}-1}{x}= \lim_{x\to0}\frac{e^{x(\cos x-1)}-1}{x(\cos x-1)}(\cos x-1) $$ and the limit of the fraction is $1$ because it's essentially the same as $$ \lim_{t\to0}\frac{e^t-1}{t} $$ because $\lim_{x\to0} x(\cos x-1)=0$ and the function $g(x)=x(\cos x-1)$ is invertible in a neighborhood of $0$.</p> Answer 6: <p>Hint: series expansion of $e^x$, $e^{x \cos x}$ and $\sin x$.</p>
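The answers above all obtain the limit $0$. As a quick numerical sanity check (not part of the original thread), one can evaluate the quotient at small $x$ and watch it shrink:

```python
import math

# Quotient from the question: (e^x - e^{x cos x}) / (x + sin x).
def f(x):
    return (math.exp(x) - math.exp(x * math.cos(x))) / (x + math.sin(x))

# As x -> 0 the values shrink roughly like x^2 / 4, consistent with limit 0.
for x in (0.1, 0.01, 0.001):
    print(x, f(x))
```

This only illustrates the analytic results above; it is not a proof.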
Find ALL integer solutions (n,m) of the equation $(n+1)(2n+1)=10m^2$
2,094,797
6
https://math.stackexchange.com/questions/2094797/find-all-integer-solutions-n-m-of-the-equation-n12n1-10m2
<p>What I'm thinking is that since RHS contains a $10$, so at least one of the factors $(n+1)$ OR $(2n+1)$ must be divisible by 2. $(2n+1)$ is out of the question, so that means $(n+1)$ must be even, or $n$ must be odd.</p> <p>If $(n+1)$ is divisible by $5$ also, then :</p> <p>$n+1 = 5k_1 = 2k_2$ (for some ($k_1,k_2\in {\mathbb Z}$))</p> <p>I do not know what to do with that $m^2$ .</p> <p>Can anyone tell me how to proceed ?</p> <p><strong>EDIT</strong> : I found a factorized equivalent for the given statement : $(n+3m)(n-3m) + (n-m)(n+m) + (3n + 1) =0 $ . Is it something useful ? </p>
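A quick brute-force search (not part of the original post) over $|n|\le 1000$ illustrates the solutions that the answers derive; `math.isqrt` is used to test whether $(n+1)(2n+1)/10$ is a perfect square:

```python
# Brute-force search for small integer solutions of (n + 1)(2n + 1) = 10 m^2.
from math import isqrt

hits = []
for n in range(-1000, 1001):
    rhs = (n + 1) * (2 * n + 1)
    if rhs >= 0 and rhs % 10 == 0:
        m2 = rhs // 10
        m = isqrt(m2)
        if m * m == m2:
            hits.append((n, m))

print(hits)  # only negative n (and n = -1 with m = 0) appear in this range
```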
[ { "answer_id": 2094851, "score": 2, "text": "<p>Let $x=n+1$. The equation becomes \n$$x(2x-1)=10m^2 \\\\\n2x^2-x-10m^2=0 \\\\\nx=\\frac{1 \\pm \\sqrt{1+80m^2}}{4}$$</p>\n\n<p>It follows that $1+80m^2$ must be a perfect square.\nThus\n$$y^2=1+80m^2 \\Rightarrow\\\\ \ny^2-80m^2=1$$</p>\n\n<p>This is the <strong>Pell equation</strong>, and has solutions. Now, for each solution \n$(y,m)$, $y$ must be odd. Then, each of $1+y, 1-y$ is even, and their sum is $2$, meaning that exactly one of them is divisible by $4$ (look at them $\\pmod{4}$).</p>\n\n<p>That particular value leads to an integer value of $x$.</p>\n" }, { "answer_id": 2094873, "score": 2, "text": "<p>I got this. We have $(n+1)(2n+1)=2n^2+3n+1=10m^2$, if we multiply last equation by $8$ we get $16n^2+24n+8=80m^2$, but we can write this equation as $$(4n+3)^2-1=80m^2.$$</p>\n\n<p>So if we set $4n+3=x$ we get the Pell-equation $x^2-80m^2=1$. The fundamental solution of this equation is $x=9$ and $m=1$, hence we have the following recursive formulas $$x_{r+1}=9x_r+80m_r,$$ $$m_{r+1}=x_r+9m_r,$$ where $r\\in\\Bbb{N}$. Therefore $$n=\\frac{9x_r+80m_r-3}{4}=\\frac{9x_r-3}{4}+20m_r.$$</p>\n\n<p>This means that in order to have $n\\in\\Bbb{Z}$ we must have $9x_r\\equiv 3\\pmod 4$, equivalently $x_r\\equiv 3\\pmod 4$. Now, from $x_{r+1}=9x_r+80m_r$ it's easy to prove by induction that $x_r\\equiv 1\\pmod4$ for every $r\\in\\Bbb{N}$, so $-x_r\\equiv 3\\pmod 4$ and this lead us to infinitely many solutions, i.e. for every $x_r$ which is a solution of $x^2-80m^2=1$, we just need to take $-x_r$ in order to have $n\\in\\Bbb{Z}$. Hence all the solutions are $$(n,m)=\\Bigl(\\frac{-x_r-3}{4}, \\pm m_r\\Bigr).$$</p>\n" }, { "answer_id": 2094841, "score": 1, "text": "<p>Since $n+1$ and $2n+1$ are coprime, they must be of the form $\\pm k_1^2$ and $\\pm 10k_2^2$ (option A), or $\\pm 2k_1^2$ and $\\pm 5k_2^2$ (option B). 
Let's explore both options in more detail.</p>\n\n<p><strong>(A)</strong> Since $2n+1$ is odd, it can't be divisible by 10, hence it equals $\pm k_1^2$, and $n+1=\pm10k_2^2$.<br />Positive branch <em>(that is, interpreting both $\pm$ as +)</em>: $20k_2^2-1=k_1^2$. This thing can't have solutions modulo 4, hence it can't have solutions at all. <br />Negative branch: $-20k_2^2-1=-k_1^2$, so $20k_2^2+1=k_1^2$. This is a Pell equation, and it produces an infinite sequence of solutions: $(n,m)=(-1,0)$, then $(-41,18)$, then $(-12\,961,5\,796)$, then $(-4\,173\,161,1\,866\,294)$ and so on.</p>\n\n<p><strong>(B)</strong> Since $2n+1$ is odd, it equals $\pm5k_1^2$, and $n+1=\pm2k_2^2$. <br />Positive branch: $4k_2^2-1=5k_1^2$. This thing can't have solutions modulo 4 either, hence it can't have solutions at all.<br />Negative branch: $-4k_2^2-1=-5k_1^2$, so $4k_2^2+1=5k_1^2$. This is again a Pell equation (maybe generalized or something), and it produces <em>another</em> infinite sequence of solutions: $(n,m)=(-3,1)$, then $(-723,323)$, then $(-232\,563,104\,005)$ and so on.</p>\n\n<p>Needless to say, you may put a minus before $m$ in any solution.</p>\n" } ]
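The recursion in answer 2094873 can be run directly. The sketch below (an added illustration, not part of any answer; function names are mine) iterates the fundamental-solution recursion for $x^2-80m^2=1$ and maps each $x_r$ to $n=(-x_r-3)/4$, confirming that every resulting pair solves $(n+1)(2n+1)=10m^2$:

```python
def pell_solutions(count=4):
    """Yield solutions (x_r, m_r) of x^2 - 80*m^2 = 1, starting from the
    fundamental solution (9, 1) and applying the recursion from the answer:
    x_{r+1} = 9*x_r + 80*m_r,  m_{r+1} = x_r + 9*m_r."""
    x, m = 9, 1
    for _ in range(count):
        yield x, m
        x, m = 9 * x + 80 * m, x + 9 * m

def nm_solutions(count=4):
    """Map each Pell pair to (n, m) via n = (-x - 3) / 4, as in the answer."""
    out = []
    for x, m in pell_solutions(count):
        assert x * x - 80 * m * m == 1       # really a Pell solution
        assert (-x - 3) % 4 == 0             # x_r % 4 == 1, so 4 divides -x_r - 3
        n = (-x - 3) // 4
        assert (n + 1) * (2 * n + 1) == 10 * m * m
        out.append((n, m))
    return out

print(nm_solutions())  # [(-3, 1), (-41, 18), (-723, 323), (-12961, 5796)]
```

Note that this single recursion interleaves the two families listed in answer 2094841, and $(n,m)=(-1,0)$ corresponds to the trivial Pell solution $x_0=1$, $m_0=0$.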
Question: Find ALL integer solutions (n,m) of the equation $(n+1)(2n+1)=10m^2$ Answer 1: <p>Let $x=n+1$. The equation becomes $$x(2x-1)=10m^2 \\ 2x^2-x-10m^2=0 \\ x=\frac{1 \pm \sqrt{1+80m^2}}{4}$$</p> <p>It follows that $1+80m^2$ must be a perfect square. Thus $$y^2=1+80m^2 \Rightarrow\\ y^2-80m^2=1$$</p> <p>This is the <strong>Pell equation</strong>, and has solutions. Now, for each solution $(y,m)$, $y$ must be odd. Then, each of $1+y, 1-y$ is even, and their sum is $2$, meaning that exactly one of them is divisible by $4$ (look at them $\pmod{4}$).</p> <p>That particular value leads to an integer value of $x$.</p> Answer 2: <p>I got this. We have $(n+1)(2n+1)=2n^2+3n+1=10m^2$, if we multiply last equation by $8$ we get $16n^2+24n+8=80m^2$, but we can write this equation as $$(4n+3)^2-1=80m^2.$$</p> <p>So if we set $4n+3=x$ we get the Pell-equation $x^2-80m^2=1$. The fundamental solution of this equation is $x=9$ and $m=1$, hence we have the following recursive formulas $$x_{r+1}=9x_r+80m_r,$$ $$m_{r+1}=x_r+9m_r,$$ where $r\in\Bbb{N}$. Therefore $$n=\frac{9x_r+80m_r-3}{4}=\frac{9x_r-3}{4}+20m_r.$$</p> <p>This means that in order to have $n\in\Bbb{Z}$ we must have $9x_r\equiv 3\pmod 4$, equivalently $x_r\equiv 3\pmod 4$. Now, from $x_{r+1}=9x_r+80m_r$ it's easy to prove by induction that $x_r\equiv 1\pmod4$ for every $r\in\Bbb{N}$, so $-x_r\equiv 3\pmod 4$ and this lead us to infinitely many solutions, i.e. for every $x_r$ which is a solution of $x^2-80m^2=1$, we just need to take $-x_r$ in order to have $n\in\Bbb{Z}$. Hence all the solutions are $$(n,m)=\Bigl(\frac{-x_r-3}{4}, \pm m_r\Bigr).$$</p> Answer 3: <p>Since $n+1$ and $2n+1$ are coprime, they must be of the form $\pm k_1^2$ and $\pm 10k_2^2$ (option A), or $\pm 2k_1^2$ and $\pm 5k_2^2$ (option B). 
Let's explore both options in more detail.</p> <p><strong>(A)</strong> Since $2n+1$ is odd, it can't be divisible by 10, hence it equals $\pm k_1^2$, and $n+1=\pm10k_2^2$.<br />Positive branch <em>(that is, interpreting both $\pm$ as +)</em>: $20k_2^2-1=k_1^2$. This has no solutions modulo $4$ (it would force $k_1^2\equiv 3\pmod 4$), hence no solutions at all. <br />Negative branch: $-20k_2^2-1=-k_1^2$, so $20k_2^2+1=k_1^2$. This is a Pell equation, and it produces an infinite sequence of solutions: $(n,m)=(-1,0)$, then $(-41,18)$, then $(-12\,961,5\,796)$, then $(-4\,173\,161,1\,866\,294)$ and so on.</p> <p><strong>(B)</strong> Since $2n+1$ is odd, it equals $\pm5k_1^2$, and $n+1=\pm2k_2^2$. <br />Positive branch: $4k_2^2-1=5k_1^2$. This has no solutions modulo $4$ either (again it would force $k_1^2\equiv 3\pmod 4$), hence no solutions at all.<br />Negative branch: $-4k_2^2-1=-5k_1^2$, so $4k_2^2+1=5k_1^2$, i.e. the (generalized) Pell equation $(2k_2)^2-5k_1^2=-1$, which produces <em>another</em> infinite sequence of solutions: $(n,m)=(-3,1)$, then $(-723,323)$, then $(-232\,563,104\,005)$ and so on.</p> <p>Needless to say, you may put a minus before $m$ in any solution.</p>
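The solution families listed in the answers above, together with the recursion from Answer 2, can be cross-checked numerically. The sketch below is not part of any answer (the helper name `pell_solutions` is mine): it iterates $x_{r+1}=9x_r+80m_r$, $m_{r+1}=x_r+9m_r$ starting from the trivial Pell solution $(x,m)=(1,0)$ and confirms that $n=(-x_r-3)/4$ is an integer solving the original equation.

```python
# Sketch only: generate solutions of (n+1)(2n+1) = 10 m^2 from the Pell
# recursion quoted above.  `pell_solutions` is my own helper name.
def pell_solutions(count):
    x, m = 1, 0  # trivial solution of x^2 - 80 m^2 = 1
    out = []
    for _ in range(count):
        assert x * x - 80 * m * m == 1   # still on the Pell conic
        assert x % 4 == 1                # so -x == 3 (mod 4) and n below is integral
        n = (-x - 3) // 4
        assert (n + 1) * (2 * n + 1) == 10 * m * m
        out.append((n, m))
        x, m = 9 * x + 80 * m, x + 9 * m
    return out

print(pell_solutions(5))
# → [(-1, 0), (-3, 1), (-41, 18), (-723, 323), (-12961, 5796)]
```

Note that the single recursion interleaves the two families found in options (A) and (B).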
Multiplication of two upper triangular matrices
2,095,108
6
https://math.stackexchange.com/questions/2095108/multiplication-of-two-upper-triangular-matrices
<blockquote> <p>Consider the product $C=(C_{ij})$ of two upper triangular matrices $A=(a_{ij})$ and $B=(b_{ij})$, with $n=m$ (rows=columns). Deduce the expression of $C=(C_{ij})$.</p> </blockquote> <p>I'm trying to do this proof but can't complete it.</p>
[ { "answer_id": 2095394, "score": 11, "text": "<blockquote>\n<p><strong>Definition.</strong> An <span class=\"math-container\">$n \\times n$</span> matrix <span class=\"math-container\">$A$</span> is upper triangular if <span class=\"math-container\">$a_{ij} = 0$</span> for <span class=\"math-container\">$i &gt;j$</span>.</p>\n</blockquote>\n<p>Now consider <span class=\"math-container\">$A,B$</span> upper triangular and <span class=\"math-container\">$C = AB$</span>. Then consider the entry <span class=\"math-container\">$C_{ij}$</span> of <span class=\"math-container\">$C$</span> with <span class=\"math-container\">$i &gt;j$</span>, given by\n<span class=\"math-container\">\\begin{align} C_{ij} &amp; = \\sum_{k =1}^n a_{ik}b_{kj} \\\\ &amp; = \\sum_{k =1}^{i-1}a_{ik}b_{kj} + \\sum_{k=i}^na_{ik}b_{kj} \\\\&amp;= 0 + 0 = 0, \\end{align}</span> since <span class=\"math-container\">\\begin{align} k &amp; \\leq i-1 \\implies a_{ik} =0 \\\\ k &amp; \\geq i \\implies b_{kj} =0 \\end{align}</span> for this particular combination of <span class=\"math-container\">$(i,j)$</span>. We conclude that <span class=\"math-container\">$C_{ij} = 0$</span> if <span class=\"math-container\">$i &gt;j$</span>, hence <span class=\"math-container\">$C$</span> is upper triangular.</p>\n<hr> \nWe can easily generalize the statement we have proven to \n<blockquote>\n<p><strong>Proposition.</strong> Let <span class=\"math-container\">$U_1,\\ldots,U_k$</span> be <span class=\"math-container\">$n \\times n$</span> upper triangular matrices. Then, <span class=\"math-container\">$$V = \\prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p>\n</blockquote>\n<p>You can show it by induction. Denote by <span class=\"math-container\">$P(k)$</span> the statement above. Then obviously <span class=\"math-container\">$P(1)$</span> is true.
Also <span class=\"math-container\">$P(k) \\implies P(k+1)$</span> (exercise!).</p>\n" }, { "answer_id": 3881792, "score": 2, "text": "<p>Let <span class=\"math-container\">$A=[a_{ij}]$</span> and <span class=\"math-container\">$B=[b_{ij}]$</span> be two <span class=\"math-container\">$n \\times n$</span> upper triangular matrices.</p>\n<p>By definition</p>\n<p><span class=\"math-container\">\\begin{equation}\na_{ij}=b_{ij}=0~~\\text{if}~~~i&gt;j.\n\\end{equation}</span></p>\n<p>Since <span class=\"math-container\">$A^T, B^T$</span> are lower triangular matrices, and</p>\n<p><span class=\"math-container\">\\begin{equation}\n(AB)^T=B^TA^T\n\\end{equation}</span>\nis a lower trigular matrix (Ref. <a href=\"https://math.stackexchange.com/q/3881777\">https://math.stackexchange.com/q/3881777</a>), hence <span class=\"math-container\">$AB$</span> is an upper triangular matrix. <span class=\"math-container\">$\\blacksquare$</span></p>\n" }, { "answer_id": 2095394, "score": 11, "text": "<blockquote>\n<p><strong>Definition.</strong> A <span class=\"math-container\">$n \\times n$</span> matrix <span class=\"math-container\">$A$</span> is upper triangular if <span class=\"math-container\">$a_{ij} = 0$</span> for <span class=\"math-container\">$i &gt;j$</span>.</p>\n</blockquote>\n<p>Now consider <span class=\"math-container\">$A,B$</span> upper triangular and <span class=\"math-container\">$C = AB$</span>. 
Then consider <span class=\"math-container\">$C_{ij}$</span> entry from <span class=\"math-container\">$C$</span> with <span class=\"math-container\">$i &gt;j$</span>, given by\n<span class=\"math-container\">\\begin{align} C_{ij} &amp; = \\sum_{k =1}^n a_{ik}b_{kj} \\\\ &amp; = \\sum_{k =1}^{i-1}a_{ik}b_{kj} + \\sum_{k=i}^na_{ik}b_{kj} \\\\&amp;= 0 + 0 = 0, \\end{align}</span> since <span class=\"math-container\">\\begin{align} k &amp; \\leq i-1 \\implies a_{ik} =0 \\\\ k &amp; \\geq i \\implies b_{kj} =0 \\end{align}</span> for this particular combination of <span class=\"math-container\">$(i,j)$</span>. We conclude that <span class=\"math-container\">$C_{ij} = 0$</span> if <span class=\"math-container\">$i &gt;j$</span> hence <span class=\"math-container\">$C$</span> is upper triangular.</p>\n<hr> \nWe can easily generalize the statement we have proven to \n<blockquote>\n<p><strong>Proposition.</strong> Let <span class=\"math-container\">$U_1,\\ldots,U_k$</span> be <span class=\"math-container\">$n \\times n$</span> upper triangular matrices. Then, <span class=\"math-container\">$$V = \\prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p>\n</blockquote>\n<p>You can show it by means of prove by induction. Denote <span class=\"math-container\">$P(k)$</span> the statement above. Then obviously <span class=\"math-container\">$P(1)$</span> is true. 
Also <span class=\"math-container\">$P(k) \\implies P(k+1)$</span> (exercise!).</p>\n" }, { "answer_id": 3881792, "score": 2, "text": "<p>Let <span class=\"math-container\">$A=[a_{ij}]$</span> and <span class=\"math-container\">$B=[b_{ij}]$</span> be two <span class=\"math-container\">$n \\times n$</span> upper triangular matrices.</p>\n<p>By definition</p>\n<p><span class=\"math-container\">\\begin{equation}\na_{ij}=b_{ij}=0~~\\text{if}~~~i&gt;j.\n\\end{equation}</span></p>\n<p>Since <span class=\"math-container\">$A^T, B^T$</span> are lower triangular matrices, and</p>\n<p><span class=\"math-container\">\\begin{equation}\n(AB)^T=B^TA^T\n\\end{equation}</span>\nis a lower trigular matrix (Ref. <a href=\"https://math.stackexchange.com/q/3881777\">https://math.stackexchange.com/q/3881777</a>), hence <span class=\"math-container\">$AB$</span> is an upper triangular matrix. <span class=\"math-container\">$\\blacksquare$</span></p>\n" }, { "answer_id": 2095394, "score": 11, "text": "<blockquote>\n<p><strong>Definition.</strong> A <span class=\"math-container\">$n \\times n$</span> matrix <span class=\"math-container\">$A$</span> is upper triangular if <span class=\"math-container\">$a_{ij} = 0$</span> for <span class=\"math-container\">$i &gt;j$</span>.</p>\n</blockquote>\n<p>Now consider <span class=\"math-container\">$A,B$</span> upper triangular and <span class=\"math-container\">$C = AB$</span>. 
Then consider <span class=\"math-container\">$C_{ij}$</span> entry from <span class=\"math-container\">$C$</span> with <span class=\"math-container\">$i &gt;j$</span>, given by\n<span class=\"math-container\">\\begin{align} C_{ij} &amp; = \\sum_{k =1}^n a_{ik}b_{kj} \\\\ &amp; = \\sum_{k =1}^{i-1}a_{ik}b_{kj} + \\sum_{k=i}^na_{ik}b_{kj} \\\\&amp;= 0 + 0 = 0, \\end{align}</span> since <span class=\"math-container\">\\begin{align} k &amp; \\leq i-1 \\implies a_{ik} =0 \\\\ k &amp; \\geq i \\implies b_{kj} =0 \\end{align}</span> for this particular combination of <span class=\"math-container\">$(i,j)$</span>. We conclude that <span class=\"math-container\">$C_{ij} = 0$</span> if <span class=\"math-container\">$i &gt;j$</span> hence <span class=\"math-container\">$C$</span> is upper triangular.</p>\n<hr> \nWe can easily generalize the statement we have proven to \n<blockquote>\n<p><strong>Proposition.</strong> Let <span class=\"math-container\">$U_1,\\ldots,U_k$</span> be <span class=\"math-container\">$n \\times n$</span> upper triangular matrices. Then, <span class=\"math-container\">$$V = \\prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p>\n</blockquote>\n<p>You can show it by means of prove by induction. Denote <span class=\"math-container\">$P(k)$</span> the statement above. Then obviously <span class=\"math-container\">$P(1)$</span> is true. 
Also <span class=\"math-container\">$P(k) \\implies P(k+1)$</span> (exercise!).</p>\n" }, { "answer_id": 3881792, "score": 2, "text": "<p>Let <span class=\"math-container\">$A=[a_{ij}]$</span> and <span class=\"math-container\">$B=[b_{ij}]$</span> be two <span class=\"math-container\">$n \\times n$</span> upper triangular matrices.</p>\n<p>By definition</p>\n<p><span class=\"math-container\">\\begin{equation}\na_{ij}=b_{ij}=0~~\\text{if}~~~i&gt;j.\n\\end{equation}</span></p>\n<p>Since <span class=\"math-container\">$A^T, B^T$</span> are lower triangular matrices, and</p>\n<p><span class=\"math-container\">\\begin{equation}\n(AB)^T=B^TA^T\n\\end{equation}</span>\nis a lower trigular matrix (Ref. <a href=\"https://math.stackexchange.com/q/3881777\">https://math.stackexchange.com/q/3881777</a>), hence <span class=\"math-container\">$AB$</span> is an upper triangular matrix. <span class=\"math-container\">$\\blacksquare$</span></p>\n" }, { "answer_id": 2095394, "score": 11, "text": "<blockquote>\n<p><strong>Definition.</strong> A <span class=\"math-container\">$n \\times n$</span> matrix <span class=\"math-container\">$A$</span> is upper triangular if <span class=\"math-container\">$a_{ij} = 0$</span> for <span class=\"math-container\">$i &gt;j$</span>.</p>\n</blockquote>\n<p>Now consider <span class=\"math-container\">$A,B$</span> upper triangular and <span class=\"math-container\">$C = AB$</span>. 
Then consider <span class=\"math-container\">$C_{ij}$</span> entry from <span class=\"math-container\">$C$</span> with <span class=\"math-container\">$i &gt;j$</span>, given by\n<span class=\"math-container\">\\begin{align} C_{ij} &amp; = \\sum_{k =1}^n a_{ik}b_{kj} \\\\ &amp; = \\sum_{k =1}^{i-1}a_{ik}b_{kj} + \\sum_{k=i}^na_{ik}b_{kj} \\\\&amp;= 0 + 0 = 0, \\end{align}</span> since <span class=\"math-container\">\\begin{align} k &amp; \\leq i-1 \\implies a_{ik} =0 \\\\ k &amp; \\geq i \\implies b_{kj} =0 \\end{align}</span> for this particular combination of <span class=\"math-container\">$(i,j)$</span>. We conclude that <span class=\"math-container\">$C_{ij} = 0$</span> if <span class=\"math-container\">$i &gt;j$</span> hence <span class=\"math-container\">$C$</span> is upper triangular.</p>\n<hr> \nWe can easily generalize the statement we have proven to \n<blockquote>\n<p><strong>Proposition.</strong> Let <span class=\"math-container\">$U_1,\\ldots,U_k$</span> be <span class=\"math-container\">$n \\times n$</span> upper triangular matrices. Then, <span class=\"math-container\">$$V = \\prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p>\n</blockquote>\n<p>You can show it by means of prove by induction. Denote <span class=\"math-container\">$P(k)$</span> the statement above. Then obviously <span class=\"math-container\">$P(1)$</span> is true. 
Also <span class=\"math-container\">$P(k) \\implies P(k+1)$</span> (exercise!).</p>\n" }, { "answer_id": 3881792, "score": 2, "text": "<p>Let <span class=\"math-container\">$A=[a_{ij}]$</span> and <span class=\"math-container\">$B=[b_{ij}]$</span> be two <span class=\"math-container\">$n \\times n$</span> upper triangular matrices.</p>\n<p>By definition</p>\n<p><span class=\"math-container\">\\begin{equation}\na_{ij}=b_{ij}=0~~\\text{if}~~~i&gt;j.\n\\end{equation}</span></p>\n<p>Since <span class=\"math-container\">$A^T, B^T$</span> are lower triangular matrices, and</p>\n<p><span class=\"math-container\">\\begin{equation}\n(AB)^T=B^TA^T\n\\end{equation}</span>\nis a lower trigular matrix (Ref. <a href=\"https://math.stackexchange.com/q/3881777\">https://math.stackexchange.com/q/3881777</a>), hence <span class=\"math-container\">$AB$</span> is an upper triangular matrix. <span class=\"math-container\">$\\blacksquare$</span></p>\n" }, { "answer_id": 2095394, "score": 11, "text": "<blockquote>\n<p><strong>Definition.</strong> A <span class=\"math-container\">$n \\times n$</span> matrix <span class=\"math-container\">$A$</span> is upper triangular if <span class=\"math-container\">$a_{ij} = 0$</span> for <span class=\"math-container\">$i &gt;j$</span>.</p>\n</blockquote>\n<p>Now consider <span class=\"math-container\">$A,B$</span> upper triangular and <span class=\"math-container\">$C = AB$</span>. 
Then consider <span class=\"math-container\">$C_{ij}$</span> entry from <span class=\"math-container\">$C$</span> with <span class=\"math-container\">$i &gt;j$</span>, given by\n<span class=\"math-container\">\\begin{align} C_{ij} &amp; = \\sum_{k =1}^n a_{ik}b_{kj} \\\\ &amp; = \\sum_{k =1}^{i-1}a_{ik}b_{kj} + \\sum_{k=i}^na_{ik}b_{kj} \\\\&amp;= 0 + 0 = 0, \\end{align}</span> since <span class=\"math-container\">\\begin{align} k &amp; \\leq i-1 \\implies a_{ik} =0 \\\\ k &amp; \\geq i \\implies b_{kj} =0 \\end{align}</span> for this particular combination of <span class=\"math-container\">$(i,j)$</span>. We conclude that <span class=\"math-container\">$C_{ij} = 0$</span> if <span class=\"math-container\">$i &gt;j$</span> hence <span class=\"math-container\">$C$</span> is upper triangular.</p>\n<hr> \nWe can easily generalize the statement we have proven to \n<blockquote>\n<p><strong>Proposition.</strong> Let <span class=\"math-container\">$U_1,\\ldots,U_k$</span> be <span class=\"math-container\">$n \\times n$</span> upper triangular matrices. Then, <span class=\"math-container\">$$V = \\prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p>\n</blockquote>\n<p>You can show it by means of prove by induction. Denote <span class=\"math-container\">$P(k)$</span> the statement above. Then obviously <span class=\"math-container\">$P(1)$</span> is true. 
Also <span class=\"math-container\">$P(k) \\implies P(k+1)$</span> (exercise!).</p>\n" }, { "answer_id": 3881792, "score": 2, "text": "<p>Let <span class=\"math-container\">$A=[a_{ij}]$</span> and <span class=\"math-container\">$B=[b_{ij}]$</span> be two <span class=\"math-container\">$n \\times n$</span> upper triangular matrices.</p>\n<p>By definition</p>\n<p><span class=\"math-container\">\\begin{equation}\na_{ij}=b_{ij}=0~~\\text{if}~~~i&gt;j.\n\\end{equation}</span></p>\n<p>Since <span class=\"math-container\">$A^T, B^T$</span> are lower triangular matrices, and</p>\n<p><span class=\"math-container\">\\begin{equation}\n(AB)^T=B^TA^T\n\\end{equation}</span>\nis a lower trigular matrix (Ref. <a href=\"https://math.stackexchange.com/q/3881777\">https://math.stackexchange.com/q/3881777</a>), hence <span class=\"math-container\">$AB$</span> is an upper triangular matrix. <span class=\"math-container\">$\\blacksquare$</span></p>\n" }, { "answer_id": 2095394, "score": 11, "text": "<blockquote>\n<p><strong>Definition.</strong> A <span class=\"math-container\">$n \\times n$</span> matrix <span class=\"math-container\">$A$</span> is upper triangular if <span class=\"math-container\">$a_{ij} = 0$</span> for <span class=\"math-container\">$i &gt;j$</span>.</p>\n</blockquote>\n<p>Now consider <span class=\"math-container\">$A,B$</span> upper triangular and <span class=\"math-container\">$C = AB$</span>. 
Then consider <span class=\"math-container\">$C_{ij}$</span> entry from <span class=\"math-container\">$C$</span> with <span class=\"math-container\">$i &gt;j$</span>, given by\n<span class=\"math-container\">\\begin{align} C_{ij} &amp; = \\sum_{k =1}^n a_{ik}b_{kj} \\\\ &amp; = \\sum_{k =1}^{i-1}a_{ik}b_{kj} + \\sum_{k=i}^na_{ik}b_{kj} \\\\&amp;= 0 + 0 = 0, \\end{align}</span> since <span class=\"math-container\">\\begin{align} k &amp; \\leq i-1 \\implies a_{ik} =0 \\\\ k &amp; \\geq i \\implies b_{kj} =0 \\end{align}</span> for this particular combination of <span class=\"math-container\">$(i,j)$</span>. We conclude that <span class=\"math-container\">$C_{ij} = 0$</span> if <span class=\"math-container\">$i &gt;j$</span> hence <span class=\"math-container\">$C$</span> is upper triangular.</p>\n<hr> \nWe can easily generalize the statement we have proven to \n<blockquote>\n<p><strong>Proposition.</strong> Let <span class=\"math-container\">$U_1,\\ldots,U_k$</span> be <span class=\"math-container\">$n \\times n$</span> upper triangular matrices. Then, <span class=\"math-container\">$$V = \\prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p>\n</blockquote>\n<p>You can show it by means of prove by induction. Denote <span class=\"math-container\">$P(k)$</span> the statement above. Then obviously <span class=\"math-container\">$P(1)$</span> is true. 
Also <span class=\"math-container\">$P(k) \\implies P(k+1)$</span> (exercise!).</p>\n" }, { "answer_id": 3881792, "score": 2, "text": "<p>Let <span class=\"math-container\">$A=[a_{ij}]$</span> and <span class=\"math-container\">$B=[b_{ij}]$</span> be two <span class=\"math-container\">$n \\times n$</span> upper triangular matrices.</p>\n<p>By definition</p>\n<p><span class=\"math-container\">\\begin{equation}\na_{ij}=b_{ij}=0~~\\text{if}~~~i&gt;j.\n\\end{equation}</span></p>\n<p>Since <span class=\"math-container\">$A^T, B^T$</span> are lower triangular matrices, and</p>\n<p><span class=\"math-container\">\\begin{equation}\n(AB)^T=B^TA^T\n\\end{equation}</span>\nis a lower trigular matrix (Ref. <a href=\"https://math.stackexchange.com/q/3881777\">https://math.stackexchange.com/q/3881777</a>), hence <span class=\"math-container\">$AB$</span> is an upper triangular matrix. <span class=\"math-container\">$\\blacksquare$</span></p>\n" }, { "answer_id": 2095394, "score": 11, "text": "<blockquote>\n<p><strong>Definition.</strong> A <span class=\"math-container\">$n \\times n$</span> matrix <span class=\"math-container\">$A$</span> is upper triangular if <span class=\"math-container\">$a_{ij} = 0$</span> for <span class=\"math-container\">$i &gt;j$</span>.</p>\n</blockquote>\n<p>Now consider <span class=\"math-container\">$A,B$</span> upper triangular and <span class=\"math-container\">$C = AB$</span>. 
Then consider <span class=\"math-container\">$C_{ij}$</span> entry from <span class=\"math-container\">$C$</span> with <span class=\"math-container\">$i &gt;j$</span>, given by\n<span class=\"math-container\">\\begin{align} C_{ij} &amp; = \\sum_{k =1}^n a_{ik}b_{kj} \\\\ &amp; = \\sum_{k =1}^{i-1}a_{ik}b_{kj} + \\sum_{k=i}^na_{ik}b_{kj} \\\\&amp;= 0 + 0 = 0, \\end{align}</span> since <span class=\"math-container\">\\begin{align} k &amp; \\leq i-1 \\implies a_{ik} =0 \\\\ k &amp; \\geq i \\implies b_{kj} =0 \\end{align}</span> for this particular combination of <span class=\"math-container\">$(i,j)$</span>. We conclude that <span class=\"math-container\">$C_{ij} = 0$</span> if <span class=\"math-container\">$i &gt;j$</span> hence <span class=\"math-container\">$C$</span> is upper triangular.</p>\n<hr> \nWe can easily generalize the statement we have proven to \n<blockquote>\n<p><strong>Proposition.</strong> Let <span class=\"math-container\">$U_1,\\ldots,U_k$</span> be <span class=\"math-container\">$n \\times n$</span> upper triangular matrices. Then, <span class=\"math-container\">$$V = \\prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p>\n</blockquote>\n<p>You can show it by means of prove by induction. Denote <span class=\"math-container\">$P(k)$</span> the statement above. Then obviously <span class=\"math-container\">$P(1)$</span> is true. 
Also <span class=\"math-container\">$P(k) \\implies P(k+1)$</span> (exercise!).</p>\n" }, { "answer_id": 3881792, "score": 2, "text": "<p>Let <span class=\"math-container\">$A=[a_{ij}]$</span> and <span class=\"math-container\">$B=[b_{ij}]$</span> be two <span class=\"math-container\">$n \\times n$</span> upper triangular matrices.</p>\n<p>By definition</p>\n<p><span class=\"math-container\">\\begin{equation}\na_{ij}=b_{ij}=0~~\\text{if}~~~i&gt;j.\n\\end{equation}</span></p>\n<p>Since <span class=\"math-container\">$A^T, B^T$</span> are lower triangular matrices, and</p>\n<p><span class=\"math-container\">\\begin{equation}\n(AB)^T=B^TA^T\n\\end{equation}</span>\nis a lower trigular matrix (Ref. <a href=\"https://math.stackexchange.com/q/3881777\">https://math.stackexchange.com/q/3881777</a>), hence <span class=\"math-container\">$AB$</span> is an upper triangular matrix. <span class=\"math-container\">$\\blacksquare$</span></p>\n" }, { "answer_id": 2095394, "score": 11, "text": "<blockquote>\n<p><strong>Definition.</strong> A <span class=\"math-container\">$n \\times n$</span> matrix <span class=\"math-container\">$A$</span> is upper triangular if <span class=\"math-container\">$a_{ij} = 0$</span> for <span class=\"math-container\">$i &gt;j$</span>.</p>\n</blockquote>\n<p>Now consider <span class=\"math-container\">$A,B$</span> upper triangular and <span class=\"math-container\">$C = AB$</span>. 
Then consider <span class=\"math-container\">$C_{ij}$</span> entry from <span class=\"math-container\">$C$</span> with <span class=\"math-container\">$i &gt;j$</span>, given by\n<span class=\"math-container\">\\begin{align} C_{ij} &amp; = \\sum_{k =1}^n a_{ik}b_{kj} \\\\ &amp; = \\sum_{k =1}^{i-1}a_{ik}b_{kj} + \\sum_{k=i}^na_{ik}b_{kj} \\\\&amp;= 0 + 0 = 0, \\end{align}</span> since <span class=\"math-container\">\\begin{align} k &amp; \\leq i-1 \\implies a_{ik} =0 \\\\ k &amp; \\geq i \\implies b_{kj} =0 \\end{align}</span> for this particular combination of <span class=\"math-container\">$(i,j)$</span>. We conclude that <span class=\"math-container\">$C_{ij} = 0$</span> if <span class=\"math-container\">$i &gt;j$</span> hence <span class=\"math-container\">$C$</span> is upper triangular.</p>\n<hr> \nWe can easily generalize the statement we have proven to \n<blockquote>\n<p><strong>Proposition.</strong> Let <span class=\"math-container\">$U_1,\\ldots,U_k$</span> be <span class=\"math-container\">$n \\times n$</span> upper triangular matrices. Then, <span class=\"math-container\">$$V = \\prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p>\n</blockquote>\n<p>You can show it by means of prove by induction. Denote <span class=\"math-container\">$P(k)$</span> the statement above. Then obviously <span class=\"math-container\">$P(1)$</span> is true. 
Also <span class=\"math-container\">$P(k) \\implies P(k+1)$</span> (exercise!).</p>\n" }, { "answer_id": 3881792, "score": 2, "text": "<p>Let <span class=\"math-container\">$A=[a_{ij}]$</span> and <span class=\"math-container\">$B=[b_{ij}]$</span> be two <span class=\"math-container\">$n \\times n$</span> upper triangular matrices.</p>\n<p>By definition</p>\n<p><span class=\"math-container\">\\begin{equation}\na_{ij}=b_{ij}=0~~\\text{if}~~~i&gt;j.\n\\end{equation}</span></p>\n<p>Since <span class=\"math-container\">$A^T, B^T$</span> are lower triangular matrices, and</p>\n<p><span class=\"math-container\">\\begin{equation}\n(AB)^T=B^TA^T\n\\end{equation}</span>\nis a lower trigular matrix (Ref. <a href=\"https://math.stackexchange.com/q/3881777\">https://math.stackexchange.com/q/3881777</a>), hence <span class=\"math-container\">$AB$</span> is an upper triangular matrix. <span class=\"math-container\">$\\blacksquare$</span></p>\n" }, { "answer_id": 2095394, "score": 11, "text": "<blockquote>\n<p><strong>Definition.</strong> A <span class=\"math-container\">$n \\times n$</span> matrix <span class=\"math-container\">$A$</span> is upper triangular if <span class=\"math-container\">$a_{ij} = 0$</span> for <span class=\"math-container\">$i &gt;j$</span>.</p>\n</blockquote>\n<p>Now consider <span class=\"math-container\">$A,B$</span> upper triangular and <span class=\"math-container\">$C = AB$</span>. 
Then consider <span class=\"math-container\">$C_{ij}$</span> entry from <span class=\"math-container\">$C$</span> with <span class=\"math-container\">$i &gt;j$</span>, given by\n<span class=\"math-container\">\\begin{align} C_{ij} &amp; = \\sum_{k =1}^n a_{ik}b_{kj} \\\\ &amp; = \\sum_{k =1}^{i-1}a_{ik}b_{kj} + \\sum_{k=i}^na_{ik}b_{kj} \\\\&amp;= 0 + 0 = 0, \\end{align}</span> since <span class=\"math-container\">\\begin{align} k &amp; \\leq i-1 \\implies a_{ik} =0 \\\\ k &amp; \\geq i \\implies b_{kj} =0 \\end{align}</span> for this particular combination of <span class=\"math-container\">$(i,j)$</span>. We conclude that <span class=\"math-container\">$C_{ij} = 0$</span> if <span class=\"math-container\">$i &gt;j$</span> hence <span class=\"math-container\">$C$</span> is upper triangular.</p>\n<hr> \nWe can easily generalize the statement we have proven to \n<blockquote>\n<p><strong>Proposition.</strong> Let <span class=\"math-container\">$U_1,\\ldots,U_k$</span> be <span class=\"math-container\">$n \\times n$</span> upper triangular matrices. Then, <span class=\"math-container\">$$V = \\prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p>\n</blockquote>\n<p>You can show it by means of prove by induction. Denote <span class=\"math-container\">$P(k)$</span> the statement above. Then obviously <span class=\"math-container\">$P(1)$</span> is true. 
Also <span class=\"math-container\">$P(k) \\implies P(k+1)$</span> (exercise!).</p>\n" }, { "answer_id": 3881792, "score": 2, "text": "<p>Let <span class=\"math-container\">$A=[a_{ij}]$</span> and <span class=\"math-container\">$B=[b_{ij}]$</span> be two <span class=\"math-container\">$n \\times n$</span> upper triangular matrices.</p>\n<p>By definition</p>\n<p><span class=\"math-container\">\\begin{equation}\na_{ij}=b_{ij}=0~~\\text{if}~~~i&gt;j.\n\\end{equation}</span></p>\n<p>Since <span class=\"math-container\">$A^T, B^T$</span> are lower triangular matrices, and</p>\n<p><span class=\"math-container\">\\begin{equation}\n(AB)^T=B^TA^T\n\\end{equation}</span>\nis a lower trigular matrix (Ref. <a href=\"https://math.stackexchange.com/q/3881777\">https://math.stackexchange.com/q/3881777</a>), hence <span class=\"math-container\">$AB$</span> is an upper triangular matrix. <span class=\"math-container\">$\\blacksquare$</span></p>\n" }, { "answer_id": 2095394, "score": 11, "text": "<blockquote>\n<p><strong>Definition.</strong> A <span class=\"math-container\">$n \\times n$</span> matrix <span class=\"math-container\">$A$</span> is upper triangular if <span class=\"math-container\">$a_{ij} = 0$</span> for <span class=\"math-container\">$i &gt;j$</span>.</p>\n</blockquote>\n<p>Now consider <span class=\"math-container\">$A,B$</span> upper triangular and <span class=\"math-container\">$C = AB$</span>. 
Then consider <span class=\"math-container\">$C_{ij}$</span> entry from <span class=\"math-container\">$C$</span> with <span class=\"math-container\">$i &gt;j$</span>, given by\n<span class=\"math-container\">\\begin{align} C_{ij} &amp; = \\sum_{k =1}^n a_{ik}b_{kj} \\\\ &amp; = \\sum_{k =1}^{i-1}a_{ik}b_{kj} + \\sum_{k=i}^na_{ik}b_{kj} \\\\&amp;= 0 + 0 = 0, \\end{align}</span> since <span class=\"math-container\">\\begin{align} k &amp; \\leq i-1 \\implies a_{ik} =0 \\\\ k &amp; \\geq i \\implies b_{kj} =0 \\end{align}</span> for this particular combination of <span class=\"math-container\">$(i,j)$</span>. We conclude that <span class=\"math-container\">$C_{ij} = 0$</span> if <span class=\"math-container\">$i &gt;j$</span> hence <span class=\"math-container\">$C$</span> is upper triangular.</p>\n<hr> \nWe can easily generalize the statement we have proven to \n<blockquote>\n<p><strong>Proposition.</strong> Let <span class=\"math-container\">$U_1,\\ldots,U_k$</span> be <span class=\"math-container\">$n \\times n$</span> upper triangular matrices. Then, <span class=\"math-container\">$$V = \\prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p>\n</blockquote>\n<p>You can show it by means of prove by induction. Denote <span class=\"math-container\">$P(k)$</span> the statement above. Then obviously <span class=\"math-container\">$P(1)$</span> is true. 
Also <span class=\"math-container\">$P(k) \\implies P(k+1)$</span> (exercise!).</p>\n" }, { "answer_id": 3881792, "score": 2, "text": "<p>Let <span class=\"math-container\">$A=[a_{ij}]$</span> and <span class=\"math-container\">$B=[b_{ij}]$</span> be two <span class=\"math-container\">$n \\times n$</span> upper triangular matrices.</p>\n<p>By definition</p>\n<p><span class=\"math-container\">\\begin{equation}\na_{ij}=b_{ij}=0~~\\text{if}~~~i&gt;j.\n\\end{equation}</span></p>\n<p>Since <span class=\"math-container\">$A^T, B^T$</span> are lower triangular matrices, and</p>\n<p><span class=\"math-container\">\\begin{equation}\n(AB)^T=B^TA^T\n\\end{equation}</span>\nis a lower trigular matrix (Ref. <a href=\"https://math.stackexchange.com/q/3881777\">https://math.stackexchange.com/q/3881777</a>), hence <span class=\"math-container\">$AB$</span> is an upper triangular matrix. <span class=\"math-container\">$\\blacksquare$</span></p>\n" }, { "answer_id": 2095394, "score": 11, "text": "<blockquote>\n<p><strong>Definition.</strong> A <span class=\"math-container\">$n \\times n$</span> matrix <span class=\"math-container\">$A$</span> is upper triangular if <span class=\"math-container\">$a_{ij} = 0$</span> for <span class=\"math-container\">$i &gt;j$</span>.</p>\n</blockquote>\n<p>Now consider <span class=\"math-container\">$A,B$</span> upper triangular and <span class=\"math-container\">$C = AB$</span>. 
Then consider <span class=\"math-container\">$C_{ij}$</span> entry from <span class=\"math-container\">$C$</span> with <span class=\"math-container\">$i &gt;j$</span>, given by\n<span class=\"math-container\">\\begin{align} C_{ij} &amp; = \\sum_{k =1}^n a_{ik}b_{kj} \\\\ &amp; = \\sum_{k =1}^{i-1}a_{ik}b_{kj} + \\sum_{k=i}^na_{ik}b_{kj} \\\\&amp;= 0 + 0 = 0, \\end{align}</span> since <span class=\"math-container\">\\begin{align} k &amp; \\leq i-1 \\implies a_{ik} =0 \\\\ k &amp; \\geq i \\implies b_{kj} =0 \\end{align}</span> for this particular combination of <span class=\"math-container\">$(i,j)$</span>. We conclude that <span class=\"math-container\">$C_{ij} = 0$</span> if <span class=\"math-container\">$i &gt;j$</span> hence <span class=\"math-container\">$C$</span> is upper triangular.</p>\n<hr> \nWe can easily generalize the statement we have proven to \n<blockquote>\n<p><strong>Proposition.</strong> Let <span class=\"math-container\">$U_1,\\ldots,U_k$</span> be <span class=\"math-container\">$n \\times n$</span> upper triangular matrices. Then, <span class=\"math-container\">$$V = \\prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p>\n</blockquote>\n<p>You can show it by means of prove by induction. Denote <span class=\"math-container\">$P(k)$</span> the statement above. Then obviously <span class=\"math-container\">$P(1)$</span> is true. 
Also <span class=\"math-container\">$P(k) \\implies P(k+1)$</span> (exercise!).</p>\n" }, { "answer_id": 3881792, "score": 2, "text": "<p>Let <span class=\"math-container\">$A=[a_{ij}]$</span> and <span class=\"math-container\">$B=[b_{ij}]$</span> be two <span class=\"math-container\">$n \\times n$</span> upper triangular matrices.</p>\n<p>By definition</p>\n<p><span class=\"math-container\">\\begin{equation}\na_{ij}=b_{ij}=0~~\\text{if}~~~i&gt;j.\n\\end{equation}</span></p>\n<p>Since <span class=\"math-container\">$A^T, B^T$</span> are lower triangular matrices, and</p>\n<p><span class=\"math-container\">\\begin{equation}\n(AB)^T=B^TA^T\n\\end{equation}</span>\nis a lower trigular matrix (Ref. <a href=\"https://math.stackexchange.com/q/3881777\">https://math.stackexchange.com/q/3881777</a>), hence <span class=\"math-container\">$AB$</span> is an upper triangular matrix. <span class=\"math-container\">$\\blacksquare$</span></p>\n" } ]
Question: Multiplication of two upper triangular matrices Answer 1: <blockquote> <p><strong>Definition.</strong> A <span class="math-container">$n \times n$</span> matrix <span class="math-container">$A$</span> is upper triangular if <span class="math-container">$a_{ij} = 0$</span> for <span class="math-container">$i &gt;j$</span>.</p> </blockquote> <p>Now consider <span class="math-container">$A,B$</span> upper triangular and <span class="math-container">$C = AB$</span>. Then consider the <span class="math-container">$C_{ij}$</span> entry of <span class="math-container">$C$</span> with <span class="math-container">$i &gt;j$</span>, given by <span class="math-container">\begin{align} C_{ij} &amp; = \sum_{k =1}^n a_{ik}b_{kj} \\ &amp; = \sum_{k =1}^{i-1}a_{ik}b_{kj} + \sum_{k=i}^na_{ik}b_{kj} \\&amp;= 0 + 0 = 0, \end{align}</span> since <span class="math-container">\begin{align} k &amp; \leq i-1 \implies a_{ik} =0 \\ k &amp; \geq i \implies b_{kj} =0 \end{align}</span> for this particular combination of <span class="math-container">$(i,j)$</span>. We conclude that <span class="math-container">$C_{ij} = 0$</span> if <span class="math-container">$i &gt;j$</span> hence <span class="math-container">$C$</span> is upper triangular.</p> <hr> We can easily generalize the statement we have proven to <blockquote> <p><strong>Proposition.</strong> Let <span class="math-container">$U_1,\ldots,U_k$</span> be <span class="math-container">$n \times n$</span> upper triangular matrices. Then, <span class="math-container">$$V = \prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p> </blockquote> <p>You can show it by means of proof by induction. Denote <span class="math-container">$P(k)$</span> the statement above. Then obviously <span class="math-container">$P(1)$</span> is true. Also <span class="math-container">$P(k) \implies P(k+1)$</span> (exercise!).</p> Answer 2: <p>Let <span class="math-container">$A=[a_{ij}]$</span> and <span class="math-container">$B=[b_{ij}]$</span> be two <span class="math-container">$n \times n$</span> upper triangular matrices.</p> <p>By definition</p> <p><span class="math-container">\begin{equation} a_{ij}=b_{ij}=0~~\text{if}~~~i&gt;j. \end{equation}</span></p> <p>Since <span class="math-container">$A^T, B^T$</span> are lower triangular matrices, and</p> <p><span class="math-container">\begin{equation} (AB)^T=B^TA^T \end{equation}</span> is a lower triangular matrix (Ref. <a href="https://math.stackexchange.com/q/3881777">https://math.stackexchange.com/q/3881777</a>), hence <span class="math-container">$AB$</span> is an upper triangular matrix. <span class="math-container">$\blacksquare$</span></p> 
Also <span class="math-container">$P(k) \implies P(k+1)$</span> (exercise!).</p> Answer 6: <p>Let <span class="math-container">$A=[a_{ij}]$</span> and <span class="math-container">$B=[b_{ij}]$</span> be two <span class="math-container">$n \times n$</span> upper triangular matrices.</p> <p>By definition</p> <p><span class="math-container">\begin{equation} a_{ij}=b_{ij}=0~~\text{if}~~~i&gt;j. \end{equation}</span></p> <p>Since <span class="math-container">$A^T, B^T$</span> are lower triangular matrices, and</p> <p><span class="math-container">\begin{equation} (AB)^T=B^TA^T \end{equation}</span> is a lower trigular matrix (Ref. <a href="https://math.stackexchange.com/q/3881777">https://math.stackexchange.com/q/3881777</a>), hence <span class="math-container">$AB$</span> is an upper triangular matrix. <span class="math-container">$\blacksquare$</span></p> Answer 7: <blockquote> <p><strong>Definition.</strong> A <span class="math-container">$n \times n$</span> matrix <span class="math-container">$A$</span> is upper triangular if <span class="math-container">$a_{ij} = 0$</span> for <span class="math-container">$i &gt;j$</span>.</p> </blockquote> <p>Now consider <span class="math-container">$A,B$</span> upper triangular and <span class="math-container">$C = AB$</span>. Then consider <span class="math-container">$C_{ij}$</span> entry from <span class="math-container">$C$</span> with <span class="math-container">$i &gt;j$</span>, given by <span class="math-container">\begin{align} C_{ij} &amp; = \sum_{k =1}^n a_{ik}b_{kj} \\ &amp; = \sum_{k =1}^{i-1}a_{ik}b_{kj} + \sum_{k=i}^na_{ik}b_{kj} \\&amp;= 0 + 0 = 0, \end{align}</span> since <span class="math-container">\begin{align} k &amp; \leq i-1 \implies a_{ik} =0 \\ k &amp; \geq i \implies b_{kj} =0 \end{align}</span> for this particular combination of <span class="math-container">$(i,j)$</span>. 
We conclude that <span class="math-container">$C_{ij} = 0$</span> if <span class="math-container">$i &gt;j$</span> hence <span class="math-container">$C$</span> is upper triangular.</p> <hr> We can easily generalize the statement we have proven to <blockquote> <p><strong>Proposition.</strong> Let <span class="math-container">$U_1,\ldots,U_k$</span> be <span class="math-container">$n \times n$</span> upper triangular matrices. Then, <span class="math-container">$$V = \prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p> </blockquote> <p>You can show it by means of prove by induction. Denote <span class="math-container">$P(k)$</span> the statement above. Then obviously <span class="math-container">$P(1)$</span> is true. Also <span class="math-container">$P(k) \implies P(k+1)$</span> (exercise!).</p> Answer 8: <p>Let <span class="math-container">$A=[a_{ij}]$</span> and <span class="math-container">$B=[b_{ij}]$</span> be two <span class="math-container">$n \times n$</span> upper triangular matrices.</p> <p>By definition</p> <p><span class="math-container">\begin{equation} a_{ij}=b_{ij}=0~~\text{if}~~~i&gt;j. \end{equation}</span></p> <p>Since <span class="math-container">$A^T, B^T$</span> are lower triangular matrices, and</p> <p><span class="math-container">\begin{equation} (AB)^T=B^TA^T \end{equation}</span> is a lower trigular matrix (Ref. <a href="https://math.stackexchange.com/q/3881777">https://math.stackexchange.com/q/3881777</a>), hence <span class="math-container">$AB$</span> is an upper triangular matrix. 
<span class="math-container">$\blacksquare$</span></p> Answer 9: <blockquote> <p><strong>Definition.</strong> A <span class="math-container">$n \times n$</span> matrix <span class="math-container">$A$</span> is upper triangular if <span class="math-container">$a_{ij} = 0$</span> for <span class="math-container">$i &gt;j$</span>.</p> </blockquote> <p>Now consider <span class="math-container">$A,B$</span> upper triangular and <span class="math-container">$C = AB$</span>. Then consider <span class="math-container">$C_{ij}$</span> entry from <span class="math-container">$C$</span> with <span class="math-container">$i &gt;j$</span>, given by <span class="math-container">\begin{align} C_{ij} &amp; = \sum_{k =1}^n a_{ik}b_{kj} \\ &amp; = \sum_{k =1}^{i-1}a_{ik}b_{kj} + \sum_{k=i}^na_{ik}b_{kj} \\&amp;= 0 + 0 = 0, \end{align}</span> since <span class="math-container">\begin{align} k &amp; \leq i-1 \implies a_{ik} =0 \\ k &amp; \geq i \implies b_{kj} =0 \end{align}</span> for this particular combination of <span class="math-container">$(i,j)$</span>. We conclude that <span class="math-container">$C_{ij} = 0$</span> if <span class="math-container">$i &gt;j$</span> hence <span class="math-container">$C$</span> is upper triangular.</p> <hr> We can easily generalize the statement we have proven to <blockquote> <p><strong>Proposition.</strong> Let <span class="math-container">$U_1,\ldots,U_k$</span> be <span class="math-container">$n \times n$</span> upper triangular matrices. Then, <span class="math-container">$$V = \prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p> </blockquote> <p>You can show it by means of prove by induction. Denote <span class="math-container">$P(k)$</span> the statement above. Then obviously <span class="math-container">$P(1)$</span> is true. 
Also <span class="math-container">$P(k) \implies P(k+1)$</span> (exercise!).</p> Answer 10: <p>Let <span class="math-container">$A=[a_{ij}]$</span> and <span class="math-container">$B=[b_{ij}]$</span> be two <span class="math-container">$n \times n$</span> upper triangular matrices.</p> <p>By definition</p> <p><span class="math-container">\begin{equation} a_{ij}=b_{ij}=0~~\text{if}~~~i&gt;j. \end{equation}</span></p> <p>Since <span class="math-container">$A^T, B^T$</span> are lower triangular matrices, and</p> <p><span class="math-container">\begin{equation} (AB)^T=B^TA^T \end{equation}</span> is a lower trigular matrix (Ref. <a href="https://math.stackexchange.com/q/3881777">https://math.stackexchange.com/q/3881777</a>), hence <span class="math-container">$AB$</span> is an upper triangular matrix. <span class="math-container">$\blacksquare$</span></p> Answer 11: <blockquote> <p><strong>Definition.</strong> A <span class="math-container">$n \times n$</span> matrix <span class="math-container">$A$</span> is upper triangular if <span class="math-container">$a_{ij} = 0$</span> for <span class="math-container">$i &gt;j$</span>.</p> </blockquote> <p>Now consider <span class="math-container">$A,B$</span> upper triangular and <span class="math-container">$C = AB$</span>. Then consider <span class="math-container">$C_{ij}$</span> entry from <span class="math-container">$C$</span> with <span class="math-container">$i &gt;j$</span>, given by <span class="math-container">\begin{align} C_{ij} &amp; = \sum_{k =1}^n a_{ik}b_{kj} \\ &amp; = \sum_{k =1}^{i-1}a_{ik}b_{kj} + \sum_{k=i}^na_{ik}b_{kj} \\&amp;= 0 + 0 = 0, \end{align}</span> since <span class="math-container">\begin{align} k &amp; \leq i-1 \implies a_{ik} =0 \\ k &amp; \geq i \implies b_{kj} =0 \end{align}</span> for this particular combination of <span class="math-container">$(i,j)$</span>. 
We conclude that <span class="math-container">$C_{ij} = 0$</span> if <span class="math-container">$i &gt;j$</span> hence <span class="math-container">$C$</span> is upper triangular.</p> <hr> We can easily generalize the statement we have proven to <blockquote> <p><strong>Proposition.</strong> Let <span class="math-container">$U_1,\ldots,U_k$</span> be <span class="math-container">$n \times n$</span> upper triangular matrices. Then, <span class="math-container">$$V = \prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p> </blockquote> <p>You can show it by means of prove by induction. Denote <span class="math-container">$P(k)$</span> the statement above. Then obviously <span class="math-container">$P(1)$</span> is true. Also <span class="math-container">$P(k) \implies P(k+1)$</span> (exercise!).</p> Answer 12: <p>Let <span class="math-container">$A=[a_{ij}]$</span> and <span class="math-container">$B=[b_{ij}]$</span> be two <span class="math-container">$n \times n$</span> upper triangular matrices.</p> <p>By definition</p> <p><span class="math-container">\begin{equation} a_{ij}=b_{ij}=0~~\text{if}~~~i&gt;j. \end{equation}</span></p> <p>Since <span class="math-container">$A^T, B^T$</span> are lower triangular matrices, and</p> <p><span class="math-container">\begin{equation} (AB)^T=B^TA^T \end{equation}</span> is a lower trigular matrix (Ref. <a href="https://math.stackexchange.com/q/3881777">https://math.stackexchange.com/q/3881777</a>), hence <span class="math-container">$AB$</span> is an upper triangular matrix. 
<span class="math-container">$\blacksquare$</span></p> Answer 13: <blockquote> <p><strong>Definition.</strong> A <span class="math-container">$n \times n$</span> matrix <span class="math-container">$A$</span> is upper triangular if <span class="math-container">$a_{ij} = 0$</span> for <span class="math-container">$i &gt;j$</span>.</p> </blockquote> <p>Now consider <span class="math-container">$A,B$</span> upper triangular and <span class="math-container">$C = AB$</span>. Then consider <span class="math-container">$C_{ij}$</span> entry from <span class="math-container">$C$</span> with <span class="math-container">$i &gt;j$</span>, given by <span class="math-container">\begin{align} C_{ij} &amp; = \sum_{k =1}^n a_{ik}b_{kj} \\ &amp; = \sum_{k =1}^{i-1}a_{ik}b_{kj} + \sum_{k=i}^na_{ik}b_{kj} \\&amp;= 0 + 0 = 0, \end{align}</span> since <span class="math-container">\begin{align} k &amp; \leq i-1 \implies a_{ik} =0 \\ k &amp; \geq i \implies b_{kj} =0 \end{align}</span> for this particular combination of <span class="math-container">$(i,j)$</span>. We conclude that <span class="math-container">$C_{ij} = 0$</span> if <span class="math-container">$i &gt;j$</span> hence <span class="math-container">$C$</span> is upper triangular.</p> <hr> We can easily generalize the statement we have proven to <blockquote> <p><strong>Proposition.</strong> Let <span class="math-container">$U_1,\ldots,U_k$</span> be <span class="math-container">$n \times n$</span> upper triangular matrices. Then, <span class="math-container">$$V = \prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p> </blockquote> <p>You can show it by means of prove by induction. Denote <span class="math-container">$P(k)$</span> the statement above. Then obviously <span class="math-container">$P(1)$</span> is true. 
Also <span class="math-container">$P(k) \implies P(k+1)$</span> (exercise!).</p> Answer 14: <p>Let <span class="math-container">$A=[a_{ij}]$</span> and <span class="math-container">$B=[b_{ij}]$</span> be two <span class="math-container">$n \times n$</span> upper triangular matrices.</p> <p>By definition</p> <p><span class="math-container">\begin{equation} a_{ij}=b_{ij}=0~~\text{if}~~~i&gt;j. \end{equation}</span></p> <p>Since <span class="math-container">$A^T, B^T$</span> are lower triangular matrices, and</p> <p><span class="math-container">\begin{equation} (AB)^T=B^TA^T \end{equation}</span> is a lower trigular matrix (Ref. <a href="https://math.stackexchange.com/q/3881777">https://math.stackexchange.com/q/3881777</a>), hence <span class="math-container">$AB$</span> is an upper triangular matrix. <span class="math-container">$\blacksquare$</span></p> Answer 15: <blockquote> <p><strong>Definition.</strong> A <span class="math-container">$n \times n$</span> matrix <span class="math-container">$A$</span> is upper triangular if <span class="math-container">$a_{ij} = 0$</span> for <span class="math-container">$i &gt;j$</span>.</p> </blockquote> <p>Now consider <span class="math-container">$A,B$</span> upper triangular and <span class="math-container">$C = AB$</span>. Then consider <span class="math-container">$C_{ij}$</span> entry from <span class="math-container">$C$</span> with <span class="math-container">$i &gt;j$</span>, given by <span class="math-container">\begin{align} C_{ij} &amp; = \sum_{k =1}^n a_{ik}b_{kj} \\ &amp; = \sum_{k =1}^{i-1}a_{ik}b_{kj} + \sum_{k=i}^na_{ik}b_{kj} \\&amp;= 0 + 0 = 0, \end{align}</span> since <span class="math-container">\begin{align} k &amp; \leq i-1 \implies a_{ik} =0 \\ k &amp; \geq i \implies b_{kj} =0 \end{align}</span> for this particular combination of <span class="math-container">$(i,j)$</span>. 
We conclude that <span class="math-container">$C_{ij} = 0$</span> if <span class="math-container">$i &gt;j$</span> hence <span class="math-container">$C$</span> is upper triangular.</p> <hr> We can easily generalize the statement we have proven to <blockquote> <p><strong>Proposition.</strong> Let <span class="math-container">$U_1,\ldots,U_k$</span> be <span class="math-container">$n \times n$</span> upper triangular matrices. Then, <span class="math-container">$$V = \prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p> </blockquote> <p>You can show it by means of prove by induction. Denote <span class="math-container">$P(k)$</span> the statement above. Then obviously <span class="math-container">$P(1)$</span> is true. Also <span class="math-container">$P(k) \implies P(k+1)$</span> (exercise!).</p> Answer 16: <p>Let <span class="math-container">$A=[a_{ij}]$</span> and <span class="math-container">$B=[b_{ij}]$</span> be two <span class="math-container">$n \times n$</span> upper triangular matrices.</p> <p>By definition</p> <p><span class="math-container">\begin{equation} a_{ij}=b_{ij}=0~~\text{if}~~~i&gt;j. \end{equation}</span></p> <p>Since <span class="math-container">$A^T, B^T$</span> are lower triangular matrices, and</p> <p><span class="math-container">\begin{equation} (AB)^T=B^TA^T \end{equation}</span> is a lower trigular matrix (Ref. <a href="https://math.stackexchange.com/q/3881777">https://math.stackexchange.com/q/3881777</a>), hence <span class="math-container">$AB$</span> is an upper triangular matrix. 
<span class="math-container">$\blacksquare$</span></p> Answer 17: <blockquote> <p><strong>Definition.</strong> A <span class="math-container">$n \times n$</span> matrix <span class="math-container">$A$</span> is upper triangular if <span class="math-container">$a_{ij} = 0$</span> for <span class="math-container">$i &gt;j$</span>.</p> </blockquote> <p>Now consider <span class="math-container">$A,B$</span> upper triangular and <span class="math-container">$C = AB$</span>. Then consider <span class="math-container">$C_{ij}$</span> entry from <span class="math-container">$C$</span> with <span class="math-container">$i &gt;j$</span>, given by <span class="math-container">\begin{align} C_{ij} &amp; = \sum_{k =1}^n a_{ik}b_{kj} \\ &amp; = \sum_{k =1}^{i-1}a_{ik}b_{kj} + \sum_{k=i}^na_{ik}b_{kj} \\&amp;= 0 + 0 = 0, \end{align}</span> since <span class="math-container">\begin{align} k &amp; \leq i-1 \implies a_{ik} =0 \\ k &amp; \geq i \implies b_{kj} =0 \end{align}</span> for this particular combination of <span class="math-container">$(i,j)$</span>. We conclude that <span class="math-container">$C_{ij} = 0$</span> if <span class="math-container">$i &gt;j$</span> hence <span class="math-container">$C$</span> is upper triangular.</p> <hr> We can easily generalize the statement we have proven to <blockquote> <p><strong>Proposition.</strong> Let <span class="math-container">$U_1,\ldots,U_k$</span> be <span class="math-container">$n \times n$</span> upper triangular matrices. Then, <span class="math-container">$$V = \prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p> </blockquote> <p>You can show it by means of prove by induction. Denote <span class="math-container">$P(k)$</span> the statement above. Then obviously <span class="math-container">$P(1)$</span> is true. 
Also <span class="math-container">$P(k) \implies P(k+1)$</span> (exercise!).</p> Answer 18: <p>Let <span class="math-container">$A=[a_{ij}]$</span> and <span class="math-container">$B=[b_{ij}]$</span> be two <span class="math-container">$n \times n$</span> upper triangular matrices.</p> <p>By definition</p> <p><span class="math-container">\begin{equation} a_{ij}=b_{ij}=0~~\text{if}~~~i&gt;j. \end{equation}</span></p> <p>Since <span class="math-container">$A^T, B^T$</span> are lower triangular matrices, and</p> <p><span class="math-container">\begin{equation} (AB)^T=B^TA^T \end{equation}</span> is a lower trigular matrix (Ref. <a href="https://math.stackexchange.com/q/3881777">https://math.stackexchange.com/q/3881777</a>), hence <span class="math-container">$AB$</span> is an upper triangular matrix. <span class="math-container">$\blacksquare$</span></p> Answer 19: <blockquote> <p><strong>Definition.</strong> A <span class="math-container">$n \times n$</span> matrix <span class="math-container">$A$</span> is upper triangular if <span class="math-container">$a_{ij} = 0$</span> for <span class="math-container">$i &gt;j$</span>.</p> </blockquote> <p>Now consider <span class="math-container">$A,B$</span> upper triangular and <span class="math-container">$C = AB$</span>. Then consider <span class="math-container">$C_{ij}$</span> entry from <span class="math-container">$C$</span> with <span class="math-container">$i &gt;j$</span>, given by <span class="math-container">\begin{align} C_{ij} &amp; = \sum_{k =1}^n a_{ik}b_{kj} \\ &amp; = \sum_{k =1}^{i-1}a_{ik}b_{kj} + \sum_{k=i}^na_{ik}b_{kj} \\&amp;= 0 + 0 = 0, \end{align}</span> since <span class="math-container">\begin{align} k &amp; \leq i-1 \implies a_{ik} =0 \\ k &amp; \geq i \implies b_{kj} =0 \end{align}</span> for this particular combination of <span class="math-container">$(i,j)$</span>. 
We conclude that <span class="math-container">$C_{ij} = 0$</span> if <span class="math-container">$i &gt;j$</span> hence <span class="math-container">$C$</span> is upper triangular.</p> <hr> We can easily generalize the statement we have proven to <blockquote> <p><strong>Proposition.</strong> Let <span class="math-container">$U_1,\ldots,U_k$</span> be <span class="math-container">$n \times n$</span> upper triangular matrices. Then, <span class="math-container">$$V = \prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p> </blockquote> <p>You can show it by means of prove by induction. Denote <span class="math-container">$P(k)$</span> the statement above. Then obviously <span class="math-container">$P(1)$</span> is true. Also <span class="math-container">$P(k) \implies P(k+1)$</span> (exercise!).</p> Answer 20: <p>Let <span class="math-container">$A=[a_{ij}]$</span> and <span class="math-container">$B=[b_{ij}]$</span> be two <span class="math-container">$n \times n$</span> upper triangular matrices.</p> <p>By definition</p> <p><span class="math-container">\begin{equation} a_{ij}=b_{ij}=0~~\text{if}~~~i&gt;j. \end{equation}</span></p> <p>Since <span class="math-container">$A^T, B^T$</span> are lower triangular matrices, and</p> <p><span class="math-container">\begin{equation} (AB)^T=B^TA^T \end{equation}</span> is a lower trigular matrix (Ref. <a href="https://math.stackexchange.com/q/3881777">https://math.stackexchange.com/q/3881777</a>), hence <span class="math-container">$AB$</span> is an upper triangular matrix. 
<span class="math-container">$\blacksquare$</span></p> Answer 21: <blockquote> <p><strong>Definition.</strong> A <span class="math-container">$n \times n$</span> matrix <span class="math-container">$A$</span> is upper triangular if <span class="math-container">$a_{ij} = 0$</span> for <span class="math-container">$i &gt;j$</span>.</p> </blockquote> <p>Now consider <span class="math-container">$A,B$</span> upper triangular and <span class="math-container">$C = AB$</span>. Then consider <span class="math-container">$C_{ij}$</span> entry from <span class="math-container">$C$</span> with <span class="math-container">$i &gt;j$</span>, given by <span class="math-container">\begin{align} C_{ij} &amp; = \sum_{k =1}^n a_{ik}b_{kj} \\ &amp; = \sum_{k =1}^{i-1}a_{ik}b_{kj} + \sum_{k=i}^na_{ik}b_{kj} \\&amp;= 0 + 0 = 0, \end{align}</span> since <span class="math-container">\begin{align} k &amp; \leq i-1 \implies a_{ik} =0 \\ k &amp; \geq i \implies b_{kj} =0 \end{align}</span> for this particular combination of <span class="math-container">$(i,j)$</span>. We conclude that <span class="math-container">$C_{ij} = 0$</span> if <span class="math-container">$i &gt;j$</span> hence <span class="math-container">$C$</span> is upper triangular.</p> <hr> We can easily generalize the statement we have proven to <blockquote> <p><strong>Proposition.</strong> Let <span class="math-container">$U_1,\ldots,U_k$</span> be <span class="math-container">$n \times n$</span> upper triangular matrices. Then, <span class="math-container">$$V = \prod_{i=1}^k U_i$$</span> is an upper triangular matrix as well.</p> </blockquote> <p>You can show it by means of prove by induction. Denote <span class="math-container">$P(k)$</span> the statement above. Then obviously <span class="math-container">$P(1)$</span> is true. 
Also <span class="math-container">$P(k) \implies P(k+1)$</span> (exercise!).</p> Answer 22: <p>Let <span class="math-container">$A=[a_{ij}]$</span> and <span class="math-container">$B=[b_{ij}]$</span> be two <span class="math-container">$n \times n$</span> upper triangular matrices.</p> <p>By definition</p> <p><span class="math-container">\begin{equation} a_{ij}=b_{ij}=0~~\text{if}~~~i&gt;j. \end{equation}</span></p> <p>Since <span class="math-container">$A^T, B^T$</span> are lower triangular matrices, and</p> <p><span class="math-container">\begin{equation} (AB)^T=B^TA^T \end{equation}</span> is a lower trigular matrix (Ref. <a href="https://math.stackexchange.com/q/3881777">https://math.stackexchange.com/q/3881777</a>), hence <span class="math-container">$AB$</span> is an upper triangular matrix. <span class="math-container">$\blacksquare$</span></p>
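The index computation in Answer 1 can be sanity-checked numerically. A minimal sketch in plain Python (the example matrices are my own, not from the answers above):

```python
def is_upper_triangular(m):
    # a_{ij} = 0 whenever i > j (0-based indices here)
    return all(m[i][j] == 0 for i in range(len(m)) for j in range(i))

def matmul(a, b):
    # C_{ij} = sum_k a_{ik} * b_{kj}
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2, 3],
     [0, 4, 5],
     [0, 0, 6]]
B = [[7, 8, 9],
     [0, 1, 2],
     [0, 0, 3]]

C = matmul(A, B)
assert is_upper_triangular(A) and is_upper_triangular(B)
assert is_upper_triangular(C)  # the product is upper triangular, as proven
```

Each below-diagonal entry of `C` vanishes for exactly the reason the proof gives: every term of the sum has either `a[i][k] == 0` or `b[k][j] == 0`.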
Show $0 \to A\mathop \to B\mathop \to C \to 0$ is split when $C$ is a free Abelian group
2,095,625
6
https://math.stackexchange.com/questions/2095625/show-0-to-a-mathop-to-b-mathop-to-c-to-0-is-split-when-c-is-a-free-abeli
<p>I am looking at a statement that, for a short exact sequence of Abelian groups<br> $$0 \to A\mathop \to \limits^f B\mathop \to \limits^g C \to 0$$</p> <p>if $C$ is a free Abelian group, then this sequence is split.</p> <p>I cannot figure out why; can anybody help?</p>
[ { "answer_id": 2774297, "score": 5, "text": "<p>We want to find $h : C \\to B$ such that $gh = \\operatorname{id}_C$ holds. Since $C$ is free abelian, it is generated by a basis $S$. By exactness, $g$ is surjective, hence for any $x \\in S$ we find $b_x \\in B$, such that $g(b_x) = x$. Now define $h : C \\to B$ to be the extension of the map $x \\mapsto b_x$ (by the extension property). Now we have for any $x \\in S$: $$gh(x) = g(b_x) = x,$$ which implies $gh = \\operatorname{id}_C$. </p>\n" }, { "answer_id": 2095631, "score": 1, "text": "<p><strong>Hint:</strong></p>\n\n<p>How do you define a homomorphism from a free abelian group (more generally a free $R$-module) into another group/module?</p>\n\n<p>Deduce a homomorphism $s\\colon C\\to B$ such that $\\;g\\circ s=\\operatorname{id}_C$, and prove that\n$$B\\simeq A\\oplus s(C).$$</p>\n" } ]
Question: Show $0 \to A\mathop \to B\mathop \to C \to 0$ is split when $C$ is a free Abelian group Answer 1: <p>We want to find $h : C \to B$ such that $gh = \operatorname{id}_C$ holds. Since $C$ is free abelian, it is generated by a basis $S$. By exactness, $g$ is surjective, hence for any $x \in S$ we find $b_x \in B$, such that $g(b_x) = x$. Now define $h : C \to B$ to be the extension of the map $x \mapsto b_x$ (by the extension property). Now we have for any $x \in S$: $$gh(x) = g(b_x) = x,$$ which implies $gh = \operatorname{id}_C$. </p> Answer 2: <p><strong>Hint:</strong></p> <p>How do you define a homomorphism from a free abelian group (more generally a free $R$-module) into another group/module?</p> <p>Deduce a homomorphism $s\colon C\to B$ such that $\;g\circ s=\operatorname{id}_C$, and prove that $$B\simeq A\oplus s(C).$$</p>
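The basis-lifting argument in Answer 1 can be summarized in display form (a sketch of the standard argument, condensed from the answers above):

```latex
% C free abelian with basis S, g : B -> C surjective by exactness.
% Choose a preimage b_x of each basis element x and extend linearly:
\[
  h\colon C \to B, \qquad
  h\Big(\sum_{x \in S} n_x\, x\Big) = \sum_{x \in S} n_x\, b_x,
  \qquad g \circ h = \operatorname{id}_C .
\]
% The section h splits the sequence, and since ker g = im f \cong A,
\[
  B \;=\; \ker g \,\oplus\, h(C) \;\cong\; A \oplus C .
\]
```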
Floor function in complex plane
2,095,674
6
https://math.stackexchange.com/questions/2095674/floor-function-in-complex-plane
<p>My question is very simple. How would one take the floor of a complex number and what is the floor of i?</p>
[ { "answer_id": 2095680, "score": 7, "text": "<p>The floor function is defined as the largest integer less than or equal to the real number given.</p>\n\n<p>Since \"less than or equal to\" isn't defined for complex numbers you can't have a floor function for them.</p>\n" }, { "answer_id": 2095679, "score": 6, "text": "<p><a href=\"http://www.wolframalpha.com/input/?i=floor%283.3%2B2.3i%29\" rel=\"noreferrer\">Wolfram Alpha</a> defines $\\lfloor a+bi\\rfloor$ as $\\lfloor a\\rfloor+\\lfloor b\\rfloor i$. With this convention you may say that $\\lfloor i\\rfloor = i$</p>\n\n<p>Take a look also <a href=\"http://mathworld.wolfram.com/FloorFunction.html\" rel=\"noreferrer\">here</a></p>\n" }, { "answer_id": 2095686, "score": 2, "text": "<p><a href=\"https://math.stackexchange.com/a/2095679/276406\">Momo's answer</a> is probably more official, but there <em>could</em> be another definition as follows:</p>\n\n<p>$$\\lfloor a + bi \\rfloor = n(a+bi)$$ where $n$ is a positive real number $\\le 1$ such that $$|\\lfloor a+bi \\rfloor | = \\lfloor |a+bi| \\rfloor$$</p>\n\n<p>In other words, keep the same <em>angle</em> and reduce the absolute value to the next integer.</p>\n" } ]
Question: Floor function in complex plane Answer 1: <p>The floor function is defined as the largest integer less than or equal to the real number given.</p> <p>Since "less than or equal to" isn't defined for complex numbers you can't have a floor function for them.</p> Answer 2: <p><a href="http://www.wolframalpha.com/input/?i=floor%283.3%2B2.3i%29" rel="noreferrer">Wolfram Alpha</a> defines $\lfloor a+bi\rfloor$ as $\lfloor a\rfloor+\lfloor b]i$. With this convention you may say that $\lfloor i\rfloor = i$</p> <p>Taka a look also <a href="http://mathworld.wolfram.com/FloorFunction.html" rel="noreferrer">here</a></p> Answer 3: <p><a href="https://math.stackexchange.com/a/2095679/276406">Momo's answer</a> is probably more official, but there <em>could</em> be another definition as follows:</p> <p>$$\lfloor a + bi \rfloor = n(a+bi)$$ where $n$ is a positive real number $\le 1$ such that $$|\lfloor a+bi \rfloor | = \lfloor |a+bi| \rfloor$$</p> <p>In other words, keep the same <em>angle</em> and reduce the absolute value to the next integer.</p> Answer 4: <p>The floor function is defined as the largest integer less than or equal to the real number given.</p> <p>Since "less than or equal to" isn't defined for complex numbers you can't have a floor function for them.</p> Answer 5: <p><a href="http://www.wolframalpha.com/input/?i=floor%283.3%2B2.3i%29" rel="noreferrer">Wolfram Alpha</a> defines $\lfloor a+bi\rfloor$ as $\lfloor a\rfloor+\lfloor b]i$. 
With this convention you may say that $\lfloor i\rfloor = i$</p> <p>Taka a look also <a href="http://mathworld.wolfram.com/FloorFunction.html" rel="noreferrer">here</a></p> Answer 6: <p><a href="https://math.stackexchange.com/a/2095679/276406">Momo's answer</a> is probably more official, but there <em>could</em> be another definition as follows:</p> <p>$$\lfloor a + bi \rfloor = n(a+bi)$$ where $n$ is a positive real number $\le 1$ such that $$|\lfloor a+bi \rfloor | = \lfloor |a+bi| \rfloor$$</p> <p>In other words, keep the same <em>angle</em> and reduce the absolute value to the next integer.</p> Answer 7: <p>The floor function is defined as the largest integer less than or equal to the real number given.</p> <p>Since "less than or equal to" isn't defined for complex numbers you can't have a floor function for them.</p> Answer 8: <p><a href="http://www.wolframalpha.com/input/?i=floor%283.3%2B2.3i%29" rel="noreferrer">Wolfram Alpha</a> defines $\lfloor a+bi\rfloor$ as $\lfloor a\rfloor+\lfloor b]i$. 
With this convention you may say that $\lfloor i\rfloor = i$</p> <p>Taka a look also <a href="http://mathworld.wolfram.com/FloorFunction.html" rel="noreferrer">here</a></p> Answer 9: <p><a href="https://math.stackexchange.com/a/2095679/276406">Momo's answer</a> is probably more official, but there <em>could</em> be another definition as follows:</p> <p>$$\lfloor a + bi \rfloor = n(a+bi)$$ where $n$ is a positive real number $\le 1$ such that $$|\lfloor a+bi \rfloor | = \lfloor |a+bi| \rfloor$$</p> <p>In other words, keep the same <em>angle</em> and reduce the absolute value to the next integer.</p> Answer 10: <p>The floor function is defined as the largest integer less than or equal to the real number given.</p> <p>Since "less than or equal to" isn't defined for complex numbers you can't have a floor function for them.</p> Answer 11: <p><a href="http://www.wolframalpha.com/input/?i=floor%283.3%2B2.3i%29" rel="noreferrer">Wolfram Alpha</a> defines $\lfloor a+bi\rfloor$ as $\lfloor a\rfloor+\lfloor b]i$. 
With this convention you may say that $\lfloor i\rfloor = i$</p> <p>Taka a look also <a href="http://mathworld.wolfram.com/FloorFunction.html" rel="noreferrer">here</a></p> Answer 12: <p><a href="https://math.stackexchange.com/a/2095679/276406">Momo's answer</a> is probably more official, but there <em>could</em> be another definition as follows:</p> <p>$$\lfloor a + bi \rfloor = n(a+bi)$$ where $n$ is a positive real number $\le 1$ such that $$|\lfloor a+bi \rfloor | = \lfloor |a+bi| \rfloor$$</p> <p>In other words, keep the same <em>angle</em> and reduce the absolute value to the next integer.</p> Answer 13: <p>The floor function is defined as the largest integer less than or equal to the real number given.</p> <p>Since "less than or equal to" isn't defined for complex numbers you can't have a floor function for them.</p> Answer 14: <p><a href="http://www.wolframalpha.com/input/?i=floor%283.3%2B2.3i%29" rel="noreferrer">Wolfram Alpha</a> defines $\lfloor a+bi\rfloor$ as $\lfloor a\rfloor+\lfloor b]i$. 
With this convention you may say that $\lfloor i\rfloor = i$</p> <p>Taka a look also <a href="http://mathworld.wolfram.com/FloorFunction.html" rel="noreferrer">here</a></p> Answer 15: <p><a href="https://math.stackexchange.com/a/2095679/276406">Momo's answer</a> is probably more official, but there <em>could</em> be another definition as follows:</p> <p>$$\lfloor a + bi \rfloor = n(a+bi)$$ where $n$ is a positive real number $\le 1$ such that $$|\lfloor a+bi \rfloor | = \lfloor |a+bi| \rfloor$$</p> <p>In other words, keep the same <em>angle</em> and reduce the absolute value to the next integer.</p> Answer 16: <p>The floor function is defined as the largest integer less than or equal to the real number given.</p> <p>Since "less than or equal to" isn't defined for complex numbers you can't have a floor function for them.</p> Answer 17: <p><a href="http://www.wolframalpha.com/input/?i=floor%283.3%2B2.3i%29" rel="noreferrer">Wolfram Alpha</a> defines $\lfloor a+bi\rfloor$ as $\lfloor a\rfloor+\lfloor b]i$. 
With this convention you may say that $\lfloor i\rfloor = i$</p> <p>Taka a look also <a href="http://mathworld.wolfram.com/FloorFunction.html" rel="noreferrer">here</a></p> Answer 18: <p><a href="https://math.stackexchange.com/a/2095679/276406">Momo's answer</a> is probably more official, but there <em>could</em> be another definition as follows:</p> <p>$$\lfloor a + bi \rfloor = n(a+bi)$$ where $n$ is a positive real number $\le 1$ such that $$|\lfloor a+bi \rfloor | = \lfloor |a+bi| \rfloor$$</p> <p>In other words, keep the same <em>angle</em> and reduce the absolute value to the next integer.</p> Answer 19: <p>The floor function is defined as the largest integer less than or equal to the real number given.</p> <p>Since "less than or equal to" isn't defined for complex numbers you can't have a floor function for them.</p> Answer 20: <p><a href="http://www.wolframalpha.com/input/?i=floor%283.3%2B2.3i%29" rel="noreferrer">Wolfram Alpha</a> defines $\lfloor a+bi\rfloor$ as $\lfloor a\rfloor+\lfloor b]i$. 
With this convention you may say that $\lfloor i\rfloor = i$</p> <p>Taka a look also <a href="http://mathworld.wolfram.com/FloorFunction.html" rel="noreferrer">here</a></p> Answer 21: <p><a href="https://math.stackexchange.com/a/2095679/276406">Momo's answer</a> is probably more official, but there <em>could</em> be another definition as follows:</p> <p>$$\lfloor a + bi \rfloor = n(a+bi)$$ where $n$ is a positive real number $\le 1$ such that $$|\lfloor a+bi \rfloor | = \lfloor |a+bi| \rfloor$$</p> <p>In other words, keep the same <em>angle</em> and reduce the absolute value to the next integer.</p> Answer 22: <p>The floor function is defined as the largest integer less than or equal to the real number given.</p> <p>Since "less than or equal to" isn't defined for complex numbers you can't have a floor function for them.</p> Answer 23: <p><a href="http://www.wolframalpha.com/input/?i=floor%283.3%2B2.3i%29" rel="noreferrer">Wolfram Alpha</a> defines $\lfloor a+bi\rfloor$ as $\lfloor a\rfloor+\lfloor b]i$. 
With this convention you may say that $\lfloor i\rfloor = i$</p> <p>Take a look also <a href="http://mathworld.wolfram.com/FloorFunction.html" rel="noreferrer">here</a></p>
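The two conventions discussed in these answers — the componentwise floor used by Wolfram Alpha, and the "keep the angle, shrink the modulus" definition — can be sketched in a few lines of Python (the function names are mine, not standard):

```python
from math import floor

def floor_componentwise(z: complex) -> complex:
    # Wolfram Alpha's convention: floor(a+bi) = floor(a) + floor(b)*i.
    return complex(floor(z.real), floor(z.imag))

def floor_same_angle(z: complex) -> complex:
    # The alternative from the thread: keep the argument of z and
    # round its absolute value down to the nearest integer.
    r = abs(z)
    if r == 0:
        return 0j
    return z * (floor(r) / r)

print(floor_componentwise(3.3 + 2.3j))  # (3+2j), matching the linked query
print(floor_same_angle(3.3 + 2.3j))     # |result| == floor(|3.3+2.3j|) == 4
print(floor_componentwise(1j), floor_same_angle(1j))  # both conventions give 1j
```

Note that both conventions agree on $\lfloor i\rfloor = i$, as the answers point out.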
A Ramanujan infinite series
2,095,780
6
https://math.stackexchange.com/questions/2095780/a-ramanujan-infinite-series
<p>$$ 1-5\left(\frac{1}{2}\right)^3+9\left(\frac{1 \cdot 3}{2 \cdot 4}\right)^3-13\left(\frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6}\right)^3+\cdots $$</p> <p>I set about evaluating the above series and found that evaluating $\displaystyle \sum_{n\ge 0}\binom{2n}{n}^3x^n$ would suffice.</p> <p>But how do we build a generating function for the third power of the central binomial coefficient using the fact that $\displaystyle \sum_{n\ge 0}\binom{2n}{n}x^n=\frac{1}{\sqrt{1-4x}}$?</p>
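A quick numerical sanity check (my own sketch, not part of the original post): the general term of the series is $(-1)^n(4n+1)\bigl(\binom{2n}{n}/4^n\bigr)^3$, and one of the answers below states the sum is $2/\pi$. The partial sums approach that value, though slowly, since the terms only decay like $n^{-1/2}$:

```python
from math import pi

# Partial sums of 1 - 5*(1/2)^3 + 9*(1*3/(2*4))^3 - 13*(1*3*5/(2*4*6))^3 + ...
# nth term: (-1)^n * (4n+1) * r**3, where r = C(2n, n) / 4^n is the ratio
# (1*3*...*(2n-1)) / (2*4*...*(2n)).
total = 0.0
r = 1.0  # empty product for n = 0
for n in range(100_000):
    total += (-1) ** n * (4 * n + 1) * r ** 3
    r *= (2 * n + 1) / (2 * n + 2)

print(total, 2 / pi)  # the partial sum slowly approaches 2/pi ≈ 0.63662
```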
[ { "answer_id": 2095793, "score": 3, "text": "<p><em>This is not an answer but just a result obtained using a CAS.</em></p>\n\n<p>Let $$f_k=\\sum_{n=0}^\\infty \\binom{2 n}{n}^kx^n$$ The following expressions have been obtained $$f_1=\\frac{1}{\\sqrt{1-4 x}}$$ $$f_2=\\frac{2 }{\\pi }K(16 x)$$ $$f_3=\\frac{4 }{\\pi ^2}K\\left(\\frac{1}{2} \\left(1-\\sqrt{1-64 x}\\right)\\right)^2$$\n$$f_4=\\, _4F_3\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1;256 x\\right)$$ $$f_5=\\,\n _5F_4\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1,1;1\n 024 x\\right)$$ where appear, for $k=2,3$, the complete elliptic integrals of the first kind and, for $k&gt;3$, the generalized hypergeometric functions</p>\n" }, { "answer_id": 2164033, "score": 1, "text": "<p>This is one of the simplest and famous series given by Ramanujan and it's value is $2/\\pi$. Unfortunately Ramanujan's technique requires a reasonable amount of effort to understand. I have presented the proof for this series and it's friend $$\\frac{4}{\\pi}= 1+\\frac{7}{4}\\left(\\frac{1}{2}\\right)^{3}+\\frac{13}{4^{2}}\\left(\\frac{1\\cdot 3}{2\\cdot 4}\\right)^{3}+\\dots$$ in <a href=\"http://paramanands.blogspot.com/2012/03/modular-equations-and-approximations-to-pi-part-2.html\" rel=\"nofollow noreferrer\">this post</a>.</p>\n" }, { "answer_id": 4358389, "score": 0, "text": "<p>Modifying @Claude Leibovici's answer a little.<br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-z\\right)^{a}}=\\,_{1}F_{0}\\left(a;;z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}z^{n}}$</span><br />\nand<br />\n<span class=\"math-container\">$\\frac{1}{\\sqrt{1-4\\cdot x}}={\\displaystyle \\sum_{n=0}^{\\infty}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)x^{n}}$</span><br />\nLet <span class=\"math-container\">$z=4x,x=\\frac{z}{4}$</span><br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-4x\\right)^{a}}=\\,_{1}F_{0}\\left(a;;4x\\right)={\\displaystyle 
\\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}\\,\\left(4x\\right)^{n}}$</span><br />\n<span class=\"math-container\">$4^{n}\\frac{\\left(\\frac{1}{2}\\right)_{n}}{n!}=\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)$</span><br />\nAnd we have the generating function for <span class=\"math-container\">$\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\ni.e.<br />\n<span class=\"math-container\">$\\,_{k}F_{k-1}\\left({\\displaystyle \\prod_{i=1}^{k}\\left(\\frac{1}{2}\\right)_n};{\\displaystyle \\prod_{i=1}^{k-1}\\left(1\\right)_n};4^{k}z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\nWhere we “borrowed” the <span class=\"math-container\">$n!=\\left(1\\right)_{n}$</span></p>\n" }, { "answer_id": 2095793, "score": 3, "text": "<p><em>This is not an answer but just a result obtained using a CAS.</em></p>\n\n<p>Let $$f_k=\\sum_{n=0}^\\infty \\binom{2 n}{n}^kx^n$$ The following expressions have been obtained $$f_1=\\frac{1}{\\sqrt{1-4 x}}$$ $$f_2=\\frac{2 }{\\pi }K(16 x)$$ $$f_3=\\frac{4 }{\\pi ^2}K\\left(\\frac{1}{2} \\left(1-\\sqrt{1-64 x}\\right)\\right)^2$$\n$$f_4=\\, _4F_3\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1;256 x\\right)$$ $$f_5=\\,\n _5F_4\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1,1;1\n 024 x\\right)$$ where appear, for $k=2,3$, the complete elliptic integrals of the first kind and, for $k&gt;3$, the generalized hypergeometric functions</p>\n" }, { "answer_id": 2164033, "score": 1, "text": "<p>This is one of the simplest and famous series given by Ramanujan and it's value is $2/\\pi$. Unfortunately Ramanujan's technique requires a reasonable amount of effort to understand. 
I have presented the proof for this series and it's friend $$\\frac{4}{\\pi}= 1+\\frac{7}{4}\\left(\\frac{1}{2}\\right)^{3}+\\frac{13}{4^{2}}\\left(\\frac{1\\cdot 3}{2\\cdot 4}\\right)^{3}+\\dots$$ in <a href=\"http://paramanands.blogspot.com/2012/03/modular-equations-and-approximations-to-pi-part-2.html\" rel=\"nofollow noreferrer\">this post</a>.</p>\n" }, { "answer_id": 4358389, "score": 0, "text": "<p>Modifying @Claude Leibovici's answer a little.<br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-z\\right)^{a}}=\\,_{1}F_{0}\\left(a;;z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}z^{n}}$</span><br />\nand<br />\n<span class=\"math-container\">$\\frac{1}{\\sqrt{1-4\\cdot x}}={\\displaystyle \\sum_{n=0}^{\\infty}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)x^{n}}$</span><br />\nLet <span class=\"math-container\">$z=4x,x=\\frac{z}{4}$</span><br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-4x\\right)^{a}}=\\,_{1}F_{0}\\left(a;;4x\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}\\,\\left(4x\\right)^{n}}$</span><br />\n<span class=\"math-container\">$4^{n}\\frac{\\left(\\frac{1}{2}\\right)_{n}}{n!}=\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)$</span><br />\nAnd we have the generating function for <span class=\"math-container\">$\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\ni.e.<br />\n<span class=\"math-container\">$\\,_{k}F_{k-1}\\left({\\displaystyle \\prod_{i=1}^{k}\\left(\\frac{1}{2}\\right)_n};{\\displaystyle \\prod_{i=1}^{k-1}\\left(1\\right)_n};4^{k}z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\nWhere we “borrowed” the <span class=\"math-container\">$n!=\\left(1\\right)_{n}$</span></p>\n" }, { "answer_id": 2095793, "score": 3, "text": "<p><em>This is not an answer but just a result obtained using a CAS.</em></p>\n\n<p>Let 
$$f_k=\\sum_{n=0}^\\infty \\binom{2 n}{n}^kx^n$$ The following expressions have been obtained $$f_1=\\frac{1}{\\sqrt{1-4 x}}$$ $$f_2=\\frac{2 }{\\pi }K(16 x)$$ $$f_3=\\frac{4 }{\\pi ^2}K\\left(\\frac{1}{2} \\left(1-\\sqrt{1-64 x}\\right)\\right)^2$$\n$$f_4=\\, _4F_3\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1;256 x\\right)$$ $$f_5=\\,\n _5F_4\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1,1;1\n 024 x\\right)$$ where appear, for $k=2,3$, the complete elliptic integrals of the first kind and, for $k&gt;3$, the generalized hypergeometric functions</p>\n" }, { "answer_id": 2164033, "score": 1, "text": "<p>This is one of the simplest and famous series given by Ramanujan and it's value is $2/\\pi$. Unfortunately Ramanujan's technique requires a reasonable amount of effort to understand. I have presented the proof for this series and it's friend $$\\frac{4}{\\pi}= 1+\\frac{7}{4}\\left(\\frac{1}{2}\\right)^{3}+\\frac{13}{4^{2}}\\left(\\frac{1\\cdot 3}{2\\cdot 4}\\right)^{3}+\\dots$$ in <a href=\"http://paramanands.blogspot.com/2012/03/modular-equations-and-approximations-to-pi-part-2.html\" rel=\"nofollow noreferrer\">this post</a>.</p>\n" }, { "answer_id": 4358389, "score": 0, "text": "<p>Modifying @Claude Leibovici's answer a little.<br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-z\\right)^{a}}=\\,_{1}F_{0}\\left(a;;z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}z^{n}}$</span><br />\nand<br />\n<span class=\"math-container\">$\\frac{1}{\\sqrt{1-4\\cdot x}}={\\displaystyle \\sum_{n=0}^{\\infty}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)x^{n}}$</span><br />\nLet <span class=\"math-container\">$z=4x,x=\\frac{z}{4}$</span><br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-4x\\right)^{a}}=\\,_{1}F_{0}\\left(a;;4x\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}\\,\\left(4x\\right)^{n}}$</span><br />\n<span 
class=\"math-container\">$4^{n}\\frac{\\left(\\frac{1}{2}\\right)_{n}}{n!}=\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)$</span><br />\nAnd we have the generating function for <span class=\"math-container\">$\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\ni.e.<br />\n<span class=\"math-container\">$\\,_{k}F_{k-1}\\left({\\displaystyle \\prod_{i=1}^{k}\\left(\\frac{1}{2}\\right)_n};{\\displaystyle \\prod_{i=1}^{k-1}\\left(1\\right)_n};4^{k}z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\nWhere we “borrowed” the <span class=\"math-container\">$n!=\\left(1\\right)_{n}$</span></p>\n" }, { "answer_id": 2095793, "score": 3, "text": "<p><em>This is not an answer but just a result obtained using a CAS.</em></p>\n\n<p>Let $$f_k=\\sum_{n=0}^\\infty \\binom{2 n}{n}^kx^n$$ The following expressions have been obtained $$f_1=\\frac{1}{\\sqrt{1-4 x}}$$ $$f_2=\\frac{2 }{\\pi }K(16 x)$$ $$f_3=\\frac{4 }{\\pi ^2}K\\left(\\frac{1}{2} \\left(1-\\sqrt{1-64 x}\\right)\\right)^2$$\n$$f_4=\\, _4F_3\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1;256 x\\right)$$ $$f_5=\\,\n _5F_4\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1,1;1\n 024 x\\right)$$ where appear, for $k=2,3$, the complete elliptic integrals of the first kind and, for $k&gt;3$, the generalized hypergeometric functions</p>\n" }, { "answer_id": 2164033, "score": 1, "text": "<p>This is one of the simplest and famous series given by Ramanujan and it's value is $2/\\pi$. Unfortunately Ramanujan's technique requires a reasonable amount of effort to understand. 
I have presented the proof for this series and it's friend $$\\frac{4}{\\pi}= 1+\\frac{7}{4}\\left(\\frac{1}{2}\\right)^{3}+\\frac{13}{4^{2}}\\left(\\frac{1\\cdot 3}{2\\cdot 4}\\right)^{3}+\\dots$$ in <a href=\"http://paramanands.blogspot.com/2012/03/modular-equations-and-approximations-to-pi-part-2.html\" rel=\"nofollow noreferrer\">this post</a>.</p>\n" }, { "answer_id": 4358389, "score": 0, "text": "<p>Modifying @Claude Leibovici's answer a little.<br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-z\\right)^{a}}=\\,_{1}F_{0}\\left(a;;z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}z^{n}}$</span><br />\nand<br />\n<span class=\"math-container\">$\\frac{1}{\\sqrt{1-4\\cdot x}}={\\displaystyle \\sum_{n=0}^{\\infty}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)x^{n}}$</span><br />\nLet <span class=\"math-container\">$z=4x,x=\\frac{z}{4}$</span><br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-4x\\right)^{a}}=\\,_{1}F_{0}\\left(a;;4x\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}\\,\\left(4x\\right)^{n}}$</span><br />\n<span class=\"math-container\">$4^{n}\\frac{\\left(\\frac{1}{2}\\right)_{n}}{n!}=\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)$</span><br />\nAnd we have the generating function for <span class=\"math-container\">$\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\ni.e.<br />\n<span class=\"math-container\">$\\,_{k}F_{k-1}\\left({\\displaystyle \\prod_{i=1}^{k}\\left(\\frac{1}{2}\\right)_n};{\\displaystyle \\prod_{i=1}^{k-1}\\left(1\\right)_n};4^{k}z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\nWhere we “borrowed” the <span class=\"math-container\">$n!=\\left(1\\right)_{n}$</span></p>\n" }, { "answer_id": 2095793, "score": 3, "text": "<p><em>This is not an answer but just a result obtained using a CAS.</em></p>\n\n<p>Let 
$$f_k=\\sum_{n=0}^\\infty \\binom{2 n}{n}^kx^n$$ The following expressions have been obtained $$f_1=\\frac{1}{\\sqrt{1-4 x}}$$ $$f_2=\\frac{2 }{\\pi }K(16 x)$$ $$f_3=\\frac{4 }{\\pi ^2}K\\left(\\frac{1}{2} \\left(1-\\sqrt{1-64 x}\\right)\\right)^2$$\n$$f_4=\\, _4F_3\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1;256 x\\right)$$ $$f_5=\\,\n _5F_4\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1,1;1\n 024 x\\right)$$ where appear, for $k=2,3$, the complete elliptic integrals of the first kind and, for $k&gt;3$, the generalized hypergeometric functions</p>\n" }, { "answer_id": 2164033, "score": 1, "text": "<p>This is one of the simplest and famous series given by Ramanujan and it's value is $2/\\pi$. Unfortunately Ramanujan's technique requires a reasonable amount of effort to understand. I have presented the proof for this series and it's friend $$\\frac{4}{\\pi}= 1+\\frac{7}{4}\\left(\\frac{1}{2}\\right)^{3}+\\frac{13}{4^{2}}\\left(\\frac{1\\cdot 3}{2\\cdot 4}\\right)^{3}+\\dots$$ in <a href=\"http://paramanands.blogspot.com/2012/03/modular-equations-and-approximations-to-pi-part-2.html\" rel=\"nofollow noreferrer\">this post</a>.</p>\n" }, { "answer_id": 4358389, "score": 0, "text": "<p>Modifying @Claude Leibovici's answer a little.<br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-z\\right)^{a}}=\\,_{1}F_{0}\\left(a;;z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}z^{n}}$</span><br />\nand<br />\n<span class=\"math-container\">$\\frac{1}{\\sqrt{1-4\\cdot x}}={\\displaystyle \\sum_{n=0}^{\\infty}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)x^{n}}$</span><br />\nLet <span class=\"math-container\">$z=4x,x=\\frac{z}{4}$</span><br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-4x\\right)^{a}}=\\,_{1}F_{0}\\left(a;;4x\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}\\,\\left(4x\\right)^{n}}$</span><br />\n<span 
class=\"math-container\">$4^{n}\\frac{\\left(\\frac{1}{2}\\right)_{n}}{n!}=\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)$</span><br />\nAnd we have the generating function for <span class=\"math-container\">$\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\ni.e.<br />\n<span class=\"math-container\">$\\,_{k}F_{k-1}\\left({\\displaystyle \\prod_{i=1}^{k}\\left(\\frac{1}{2}\\right)_n};{\\displaystyle \\prod_{i=1}^{k-1}\\left(1\\right)_n};4^{k}z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\nWhere we “borrowed” the <span class=\"math-container\">$n!=\\left(1\\right)_{n}$</span></p>\n" }, { "answer_id": 2095793, "score": 3, "text": "<p><em>This is not an answer but just a result obtained using a CAS.</em></p>\n\n<p>Let $$f_k=\\sum_{n=0}^\\infty \\binom{2 n}{n}^kx^n$$ The following expressions have been obtained $$f_1=\\frac{1}{\\sqrt{1-4 x}}$$ $$f_2=\\frac{2 }{\\pi }K(16 x)$$ $$f_3=\\frac{4 }{\\pi ^2}K\\left(\\frac{1}{2} \\left(1-\\sqrt{1-64 x}\\right)\\right)^2$$\n$$f_4=\\, _4F_3\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1;256 x\\right)$$ $$f_5=\\,\n _5F_4\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1,1;1\n 024 x\\right)$$ where appear, for $k=2,3$, the complete elliptic integrals of the first kind and, for $k&gt;3$, the generalized hypergeometric functions</p>\n" }, { "answer_id": 2164033, "score": 1, "text": "<p>This is one of the simplest and famous series given by Ramanujan and it's value is $2/\\pi$. Unfortunately Ramanujan's technique requires a reasonable amount of effort to understand. 
I have presented the proof for this series and it's friend $$\\frac{4}{\\pi}= 1+\\frac{7}{4}\\left(\\frac{1}{2}\\right)^{3}+\\frac{13}{4^{2}}\\left(\\frac{1\\cdot 3}{2\\cdot 4}\\right)^{3}+\\dots$$ in <a href=\"http://paramanands.blogspot.com/2012/03/modular-equations-and-approximations-to-pi-part-2.html\" rel=\"nofollow noreferrer\">this post</a>.</p>\n" }, { "answer_id": 4358389, "score": 0, "text": "<p>Modifying @Claude Leibovici's answer a little.<br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-z\\right)^{a}}=\\,_{1}F_{0}\\left(a;;z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}z^{n}}$</span><br />\nand<br />\n<span class=\"math-container\">$\\frac{1}{\\sqrt{1-4\\cdot x}}={\\displaystyle \\sum_{n=0}^{\\infty}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)x^{n}}$</span><br />\nLet <span class=\"math-container\">$z=4x,x=\\frac{z}{4}$</span><br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-4x\\right)^{a}}=\\,_{1}F_{0}\\left(a;;4x\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}\\,\\left(4x\\right)^{n}}$</span><br />\n<span class=\"math-container\">$4^{n}\\frac{\\left(\\frac{1}{2}\\right)_{n}}{n!}=\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)$</span><br />\nAnd we have the generating function for <span class=\"math-container\">$\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\ni.e.<br />\n<span class=\"math-container\">$\\,_{k}F_{k-1}\\left({\\displaystyle \\prod_{i=1}^{k}\\left(\\frac{1}{2}\\right)_n};{\\displaystyle \\prod_{i=1}^{k-1}\\left(1\\right)_n};4^{k}z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\nWhere we “borrowed” the <span class=\"math-container\">$n!=\\left(1\\right)_{n}$</span></p>\n" }, { "answer_id": 2095793, "score": 3, "text": "<p><em>This is not an answer but just a result obtained using a CAS.</em></p>\n\n<p>Let 
$$f_k=\\sum_{n=0}^\\infty \\binom{2 n}{n}^kx^n$$ The following expressions have been obtained $$f_1=\\frac{1}{\\sqrt{1-4 x}}$$ $$f_2=\\frac{2 }{\\pi }K(16 x)$$ $$f_3=\\frac{4 }{\\pi ^2}K\\left(\\frac{1}{2} \\left(1-\\sqrt{1-64 x}\\right)\\right)^2$$\n$$f_4=\\, _4F_3\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1;256 x\\right)$$ $$f_5=\\,\n _5F_4\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1,1;1\n 024 x\\right)$$ where appear, for $k=2,3$, the complete elliptic integrals of the first kind and, for $k&gt;3$, the generalized hypergeometric functions</p>\n" }, { "answer_id": 2164033, "score": 1, "text": "<p>This is one of the simplest and famous series given by Ramanujan and it's value is $2/\\pi$. Unfortunately Ramanujan's technique requires a reasonable amount of effort to understand. I have presented the proof for this series and it's friend $$\\frac{4}{\\pi}= 1+\\frac{7}{4}\\left(\\frac{1}{2}\\right)^{3}+\\frac{13}{4^{2}}\\left(\\frac{1\\cdot 3}{2\\cdot 4}\\right)^{3}+\\dots$$ in <a href=\"http://paramanands.blogspot.com/2012/03/modular-equations-and-approximations-to-pi-part-2.html\" rel=\"nofollow noreferrer\">this post</a>.</p>\n" }, { "answer_id": 4358389, "score": 0, "text": "<p>Modifying @Claude Leibovici's answer a little.<br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-z\\right)^{a}}=\\,_{1}F_{0}\\left(a;;z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}z^{n}}$</span><br />\nand<br />\n<span class=\"math-container\">$\\frac{1}{\\sqrt{1-4\\cdot x}}={\\displaystyle \\sum_{n=0}^{\\infty}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)x^{n}}$</span><br />\nLet <span class=\"math-container\">$z=4x,x=\\frac{z}{4}$</span><br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-4x\\right)^{a}}=\\,_{1}F_{0}\\left(a;;4x\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}\\,\\left(4x\\right)^{n}}$</span><br />\n<span 
class=\"math-container\">$4^{n}\\frac{\\left(\\frac{1}{2}\\right)_{n}}{n!}=\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)$</span><br />\nAnd we have the generating function for <span class=\"math-container\">$\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\ni.e.<br />\n<span class=\"math-container\">$\\,_{k}F_{k-1}\\left({\\displaystyle \\prod_{i=1}^{k}\\left(\\frac{1}{2}\\right)_n};{\\displaystyle \\prod_{i=1}^{k-1}\\left(1\\right)_n};4^{k}z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\nWhere we “borrowed” the <span class=\"math-container\">$n!=\\left(1\\right)_{n}$</span></p>\n" }, { "answer_id": 2095793, "score": 3, "text": "<p><em>This is not an answer but just a result obtained using a CAS.</em></p>\n\n<p>Let $$f_k=\\sum_{n=0}^\\infty \\binom{2 n}{n}^kx^n$$ The following expressions have been obtained $$f_1=\\frac{1}{\\sqrt{1-4 x}}$$ $$f_2=\\frac{2 }{\\pi }K(16 x)$$ $$f_3=\\frac{4 }{\\pi ^2}K\\left(\\frac{1}{2} \\left(1-\\sqrt{1-64 x}\\right)\\right)^2$$\n$$f_4=\\, _4F_3\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1;256 x\\right)$$ $$f_5=\\,\n _5F_4\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1,1;1\n 024 x\\right)$$ where appear, for $k=2,3$, the complete elliptic integrals of the first kind and, for $k&gt;3$, the generalized hypergeometric functions</p>\n" }, { "answer_id": 2164033, "score": 1, "text": "<p>This is one of the simplest and famous series given by Ramanujan and it's value is $2/\\pi$. Unfortunately Ramanujan's technique requires a reasonable amount of effort to understand. 
I have presented the proof for this series and it's friend $$\\frac{4}{\\pi}= 1+\\frac{7}{4}\\left(\\frac{1}{2}\\right)^{3}+\\frac{13}{4^{2}}\\left(\\frac{1\\cdot 3}{2\\cdot 4}\\right)^{3}+\\dots$$ in <a href=\"http://paramanands.blogspot.com/2012/03/modular-equations-and-approximations-to-pi-part-2.html\" rel=\"nofollow noreferrer\">this post</a>.</p>\n" }, { "answer_id": 4358389, "score": 0, "text": "<p>Modifying @Claude Leibovici's answer a little.<br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-z\\right)^{a}}=\\,_{1}F_{0}\\left(a;;z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}z^{n}}$</span><br />\nand<br />\n<span class=\"math-container\">$\\frac{1}{\\sqrt{1-4\\cdot x}}={\\displaystyle \\sum_{n=0}^{\\infty}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)x^{n}}$</span><br />\nLet <span class=\"math-container\">$z=4x,x=\\frac{z}{4}$</span><br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-4x\\right)^{a}}=\\,_{1}F_{0}\\left(a;;4x\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}\\,\\left(4x\\right)^{n}}$</span><br />\n<span class=\"math-container\">$4^{n}\\frac{\\left(\\frac{1}{2}\\right)_{n}}{n!}=\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)$</span><br />\nAnd we have the generating function for <span class=\"math-container\">$\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\ni.e.<br />\n<span class=\"math-container\">$\\,_{k}F_{k-1}\\left({\\displaystyle \\prod_{i=1}^{k}\\left(\\frac{1}{2}\\right)_n};{\\displaystyle \\prod_{i=1}^{k-1}\\left(1\\right)_n};4^{k}z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\nWhere we “borrowed” the <span class=\"math-container\">$n!=\\left(1\\right)_{n}$</span></p>\n" }, { "answer_id": 2095793, "score": 3, "text": "<p><em>This is not an answer but just a result obtained using a CAS.</em></p>\n\n<p>Let 
$$f_k=\\sum_{n=0}^\\infty \\binom{2 n}{n}^kx^n$$ The following expressions have been obtained $$f_1=\\frac{1}{\\sqrt{1-4 x}}$$ $$f_2=\\frac{2 }{\\pi }K(16 x)$$ $$f_3=\\frac{4 }{\\pi ^2}K\\left(\\frac{1}{2} \\left(1-\\sqrt{1-64 x}\\right)\\right)^2$$\n$$f_4=\\, _4F_3\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1;256 x\\right)$$ $$f_5=\\,\n _5F_4\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1,1;1\n 024 x\\right)$$ where appear, for $k=2,3$, the complete elliptic integrals of the first kind and, for $k&gt;3$, the generalized hypergeometric functions</p>\n" }, { "answer_id": 2164033, "score": 1, "text": "<p>This is one of the simplest and famous series given by Ramanujan and it's value is $2/\\pi$. Unfortunately Ramanujan's technique requires a reasonable amount of effort to understand. I have presented the proof for this series and it's friend $$\\frac{4}{\\pi}= 1+\\frac{7}{4}\\left(\\frac{1}{2}\\right)^{3}+\\frac{13}{4^{2}}\\left(\\frac{1\\cdot 3}{2\\cdot 4}\\right)^{3}+\\dots$$ in <a href=\"http://paramanands.blogspot.com/2012/03/modular-equations-and-approximations-to-pi-part-2.html\" rel=\"nofollow noreferrer\">this post</a>.</p>\n" }, { "answer_id": 4358389, "score": 0, "text": "<p>Modifying @Claude Leibovici's answer a little.<br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-z\\right)^{a}}=\\,_{1}F_{0}\\left(a;;z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}z^{n}}$</span><br />\nand<br />\n<span class=\"math-container\">$\\frac{1}{\\sqrt{1-4\\cdot x}}={\\displaystyle \\sum_{n=0}^{\\infty}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)x^{n}}$</span><br />\nLet <span class=\"math-container\">$z=4x,x=\\frac{z}{4}$</span><br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-4x\\right)^{a}}=\\,_{1}F_{0}\\left(a;;4x\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}\\,\\left(4x\\right)^{n}}$</span><br />\n<span 
class=\"math-container\">$4^{n}\\frac{\\left(\\frac{1}{2}\\right)_{n}}{n!}=\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)$</span><br />\nAnd we have the generating function for <span class=\"math-container\">$\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\ni.e.<br />\n<span class=\"math-container\">$\\,_{k}F_{k-1}\\left({\\displaystyle \\prod_{i=1}^{k}\\left(\\frac{1}{2}\\right)_n};{\\displaystyle \\prod_{i=1}^{k-1}\\left(1\\right)_n};4^{k}z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\nWhere we “borrowed” the <span class=\"math-container\">$n!=\\left(1\\right)_{n}$</span></p>\n" }, { "answer_id": 2095793, "score": 3, "text": "<p><em>This is not an answer but just a result obtained using a CAS.</em></p>\n\n<p>Let $$f_k=\\sum_{n=0}^\\infty \\binom{2 n}{n}^kx^n$$ The following expressions have been obtained $$f_1=\\frac{1}{\\sqrt{1-4 x}}$$ $$f_2=\\frac{2 }{\\pi }K(16 x)$$ $$f_3=\\frac{4 }{\\pi ^2}K\\left(\\frac{1}{2} \\left(1-\\sqrt{1-64 x}\\right)\\right)^2$$\n$$f_4=\\, _4F_3\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1;256 x\\right)$$ $$f_5=\\,\n _5F_4\\left(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2},\\frac{1}{2};1,1,1,1;1\n 024 x\\right)$$ where appear, for $k=2,3$, the complete elliptic integrals of the first kind and, for $k&gt;3$, the generalized hypergeometric functions</p>\n" }, { "answer_id": 2164033, "score": 1, "text": "<p>This is one of the simplest and famous series given by Ramanujan and it's value is $2/\\pi$. Unfortunately Ramanujan's technique requires a reasonable amount of effort to understand. 
I have presented the proof for this series and it's friend $$\\frac{4}{\\pi}= 1+\\frac{7}{4}\\left(\\frac{1}{2}\\right)^{3}+\\frac{13}{4^{2}}\\left(\\frac{1\\cdot 3}{2\\cdot 4}\\right)^{3}+\\dots$$ in <a href=\"http://paramanands.blogspot.com/2012/03/modular-equations-and-approximations-to-pi-part-2.html\" rel=\"nofollow noreferrer\">this post</a>.</p>\n" }, { "answer_id": 4358389, "score": 0, "text": "<p>Modifying @Claude Leibovici's answer a little.<br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-z\\right)^{a}}=\\,_{1}F_{0}\\left(a;;z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}z^{n}}$</span><br />\nand<br />\n<span class=\"math-container\">$\\frac{1}{\\sqrt{1-4\\cdot x}}={\\displaystyle \\sum_{n=0}^{\\infty}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)x^{n}}$</span><br />\nLet <span class=\"math-container\">$z=4x,x=\\frac{z}{4}$</span><br />\n<span class=\"math-container\">$\\frac{1}{\\left(1-4x\\right)^{a}}=\\,_{1}F_{0}\\left(a;;4x\\right)={\\displaystyle \\sum_{n=0}^{\\infty}\\frac{a_{n}}{n!}\\,\\left(4x\\right)^{n}}$</span><br />\n<span class=\"math-container\">$4^{n}\\frac{\\left(\\frac{1}{2}\\right)_{n}}{n!}=\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)$</span><br />\nAnd we have the generating function for <span class=\"math-container\">$\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\ni.e.<br />\n<span class=\"math-container\">$\\,_{k}F_{k-1}\\left({\\displaystyle \\prod_{i=1}^{k}\\left(\\frac{1}{2}\\right)_n};{\\displaystyle \\prod_{i=1}^{k-1}\\left(1\\right)_n};4^{k}z\\right)={\\displaystyle \\sum_{n=0}^{\\infty}}\\left(\\begin{array}{c}\n2n\\\\\nn\n\\end{array}\\right)^{k}\\left(4^{k}x\\right)^{n}$</span><br />\nWhere we “borrowed” the <span class=\"math-container\">$n!=\\left(1\\right)_{n}$</span></p>\n" } ]
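The last answer in the list above leans on the Pochhammer-symbol identity $4^{n}\left(\tfrac{1}{2}\right)_{n}/n! = \binom{2n}{n}$. As a quick exact sanity check (a sketch of mine, not part of the original thread), Python's `fractions` module can verify it for small $n$:

```python
from fractions import Fraction
from math import comb, factorial

def pochhammer(a, n):
    """Rising factorial (a)_n = a * (a+1) * ... * (a+n-1)."""
    result = Fraction(1)
    for i in range(n):
        result *= a + i
    return result

# 4^n * (1/2)_n / n! should be exactly the central binomial coefficient C(2n, n)
for n in range(25):
    lhs = Fraction(4) ** n * pochhammer(Fraction(1, 2), n) / factorial(n)
    assert lhs == comb(2 * n, n), n
print("4^n (1/2)_n / n! = C(2n, n) holds for n = 0..24")
```

The check is exact (no floating point), since `Fraction` arithmetic keeps every intermediate value rational.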
Question: A Ramanujan infinite series Answer 1: <p><em>This is not an answer but just a result obtained using a CAS.</em></p> <p>Let $$f_k=\sum_{n=0}^\infty \binom{2 n}{n}^kx^n$$ The following expressions have been obtained $$f_1=\frac{1}{\sqrt{1-4 x}}$$ $$f_2=\frac{2 }{\pi }K(16 x)$$ $$f_3=\frac{4 }{\pi ^2}K\left(\frac{1}{2} \left(1-\sqrt{1-64 x}\right)\right)^2$$ $$f_4=\, _4F_3\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2};1,1,1;256 x\right)$$ $$f_5=\, _5F_4\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2};1,1,1,1;1024 x\right)$$ where appear, for $k=2,3$, the complete elliptic integrals of the first kind and, for $k&gt;3$, the generalized hypergeometric functions</p> Answer 2: <p>This is one of the simplest and famous series given by Ramanujan and it's value is $2/\pi$. Unfortunately Ramanujan's technique requires a reasonable amount of effort to understand. I have presented the proof for this series and it's friend $$\frac{4}{\pi}= 1+\frac{7}{4}\left(\frac{1}{2}\right)^{3}+\frac{13}{4^{2}}\left(\frac{1\cdot 3}{2\cdot 4}\right)^{3}+\dots$$ in <a href="http://paramanands.blogspot.com/2012/03/modular-equations-and-approximations-to-pi-part-2.html" rel="nofollow noreferrer">this post</a>.</p> Answer 3: <p>Modifying @Claude Leibovici's answer a little.<br /> <span class="math-container">$\frac{1}{\left(1-z\right)^{a}}=\,_{1}F_{0}\left(a;;z\right)={\displaystyle \sum_{n=0}^{\infty}\frac{a_{n}}{n!}z^{n}}$</span><br /> and<br /> <span class="math-container">$\frac{1}{\sqrt{1-4\cdot x}}={\displaystyle \sum_{n=0}^{\infty}\left(\begin{array}{c} 2n\\ n \end{array}\right)x^{n}}$</span><br /> Let <span class="math-container">$z=4x,x=\frac{z}{4}$</span><br /> <span class="math-container">$\frac{1}{\left(1-4x\right)^{a}}=\,_{1}F_{0}\left(a;;4x\right)={\displaystyle \sum_{n=0}^{\infty}\frac{a_{n}}{n!}\,\left(4x\right)^{n}}$</span><br /> <span class="math-container">$4^{n}\frac{\left(\frac{1}{2}\right)_{n}}{n!}=\left(\begin{array}{c} 2n\\ n \end{array}\right)$</span><br /> And we have the generating function for <span class="math-container">$\left(\begin{array}{c} 2n\\ n \end{array}\right)^{k}\left(4^{k}x\right)^{n}$</span><br /> i.e.<br /> <span class="math-container">$\,_{k}F_{k-1}\left({\displaystyle \prod_{i=1}^{k}\left(\frac{1}{2}\right)_n};{\displaystyle \prod_{i=1}^{k-1}\left(1\right)_n};4^{k}z\right)={\displaystyle \sum_{n=0}^{\infty}}\left(\begin{array}{c} 2n\\ n \end{array}\right)^{k}\left(4^{k}x\right)^{n}$</span><br /> Where we “borrowed” the <span class="math-container">$n!=\left(1\right)_{n}$</span></p>
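The series that Answer 2 above values at $2/\pi$ is, judging from the linked post, presumably Bauer's classical series $\sum_{n\ge 0}(-1)^{n}(4n+1)\bigl(\binom{2n}{n}/4^{n}\bigr)^{3}=2/\pi$. A hedged numerical check (my code, not from the thread; the slowly decaying alternating tail is tamed by averaging two consecutive partial sums):

```python
from math import pi

def bauer_partial_sums(n_terms):
    """Partial sums of sum_{n>=0} (-1)^n (4n+1) (C(2n,n)/4^n)^3.

    r tracks C(2n,n)/4^n through the recurrence r_n = r_{n-1}*(2n-1)/(2n),
    which avoids computing huge factorials.
    """
    r, s, sums = 1.0, 0.0, []
    for n in range(n_terms):
        if n > 0:
            r *= (2 * n - 1) / (2 * n)
        s += (-1) ** n * (4 * n + 1) * r ** 3
        sums.append(s)
    return sums

sums = bauer_partial_sums(20001)
# Terms decay only like n^(-1/2), so raw partial sums converge slowly; for an
# alternating series, averaging two consecutive partial sums is far tighter.
estimate = (sums[-1] + sums[-2]) / 2
print(estimate, 2 / pi)  # both ≈ 0.63662
assert abs(estimate - 2 / pi) < 1e-5
```

The first two partial sums are $1$ and $1-5(1/2)^3 = 0.375$, matching the opening terms of the series as usually written.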
What can be said about adjunctions between groups (regarded as one-object categories)?
2,096,549
6
https://math.stackexchange.com/questions/2096549/what-can-be-said-about-adjunctions-between-groups-regarded-as-one-object-catego
<p>I am trying to answer this question: What can be said about adjunctions between groups (regarded as one-object categories)?</p> <p>Here is what I have arrived at so far, but I can't quite conclude; it seems to me that one last step is missing.</p> <p>Let $G_1$ and $G_2$ be groups and let's view them as one-object categories $\mathcal{G}_1$ and $\mathcal{G}_2$, whose only objects are $G_1$ and $G_2$ respectively. Let's then take functors $F:\mathcal{G}_1\longrightarrow\mathcal{G}_2$, $H:\mathcal{G}_2\longrightarrow\mathcal{G}_1$ and suppose $F\dashv H$. This is equivalent to stating that the map $\eta_{G_1}:G_1\longrightarrow HF(G_1)$ is initial in $(G_1\Rightarrow H)$ (it suffices to state it for this single map $\eta_{G_1}$ since $G_1$ is the only object of $\mathcal{G}_1$). However, since $\mathcal{G}_1$ and $\mathcal{G}_2$ are one-object categories, we already know how the functors $F$ and $H$ are defined on objects, namely $F(G_1)=G_2$ and $H(G_2)=G_1$. So the map $\eta_{G_1}$ is nothing but a function from $G_1$ to itself. Let's now look at the comma category $(G_1\Rightarrow H)$. Its objects are the maps of the form $g_1:G_1\longrightarrow H(G_2)$, so they are simply the functions from $G_1$ to itself. A map between $g_1$ and $g_1'$ in $(G_1\Rightarrow H)$ is a function $g_2:G_2\longrightarrow G_2$ such that $g_1'=H(g_2)\circ g_1$, so we can write it as "$g_1\rightarrow H(g_2)$". Now, we want to impose that $\eta_{G_1}$ is an initial object in $(G_1\Rightarrow H)$. So let $g_1$ be an arbitrary object of $(G_1\Rightarrow H)$, and we require the map $g_2\in (G_1\Rightarrow H)(\eta_{G_1},g_1)$ to exist and be unique. Thus $g_2$ must be the unique map such that $g_1=H(g_2)\circ \eta_{G_1}$.</p> <p>So now, what of real interest can we say about this $g_2$?</p> <p>Thanks in advance for any help!</p>
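For a concrete feel of what such an adjunction between one-object categories can look like, here is a hedged sketch (the choice $G_1=G_2=S_3$ and all names below are mine, not from the thread): take $F=\mathrm{id}$, let $H$ be conjugation by a fixed element $a$, and set unit $\eta=a$ and counit $\epsilon=a^{-1}$; the naturality and triangle identities then check out mechanically.

```python
from itertools import permutations

# Elements of S3 as tuples p, where p[i] is the image of i.
S3 = list(permutations(range(3)))
identity = (0, 1, 2)

def compose(p, q):
    """Composition of permutations: (p ∘ q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

a = (1, 2, 0)  # a fixed 3-cycle; conjugation by a is an inner automorphism

def F(g):      # F = identity functor on the one-object category on S3
    return g

def H(g):      # H = conjugation by a: g |-> a g a^{-1}
    return compose(compose(a, g), inverse(a))

eta = a            # unit eta : id => H∘F, identified with a group element
eps = inverse(a)   # counit eps : F∘H => id

for g in S3:
    for h in S3:
        # H is a group homomorphism (a functor between one-object categories)
        assert H(compose(g, h)) == compose(H(g), H(h))
    # naturality of the unit:   HF(g) * eta == eta * g
    assert compose(H(F(g)), eta) == compose(eta, g)
    # naturality of the counit: g * eps == eps * FH(g)
    assert compose(g, eps) == compose(eps, F(H(g)))

# triangle ("zig-zag") identities
assert compose(eps, F(eta)) == identity
assert compose(H(eps), eta) == identity
print("F ⊣ H verified: unit", eta, "counit", eps)
```

This toy case illustrates the general picture for groups: both composites $HF$ and $FH$ are inner automorphisms, so any adjunction of this kind is in particular an equivalence.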
[ { "answer_id": 2096614, "score": 8, "text": "<p>Since groups as one-object categories are a special case of <em>groupoids</em>, that is, categories in which every morphism is an isomorphism, for any adjunction $F\\dashv H\\colon\\mathcal G_1\\to\\mathcal G_2$ between two group(oid)s, the unit and counit natural transformations $\\mathrm{id}_{\\mathcal G_1}\\overset{\\eta}\\Rightarrow HF$ and $FH\\overset\\epsilon\\Rightarrow\\mathrm{id}_{\\mathcal G_2}$ will be natural isomorphisms. In particular, all adjoint pairs of functors between two group(oid)s are adjoint <em>equivalences</em>.</p>\n\n<p>In the special case of the adjunction $F\\dashv H\\colon\\mathcal G_1\\to\\mathcal G_2$ of one-object categories (i.e. monoids), we can identify $\\eta$ and $\\epsilon$ with elements of $\\mathcal G_1$ and $\\mathcal G_2$ respectively, which must satisfy the naturality conditions</p>\n\n<ol>\n<li>$HF(g_1)\\eta=\\eta g_1$ for all $g_1\\in\\mathcal G_1$</li>\n<li>$g_2\\epsilon=\\epsilon FH(g_2)$ for all $g_2\\in\\mathcal G_2$</li>\n</ol>\n\n<p>Furthermore, the zig-zag identities which makes the unit and counit into an adjunction between the two functors (i.e. monoid homomorphisms) are</p>\n\n<ol>\n<li>$\\epsilon F(\\eta)=\\mathrm{id}_{\\mathcal G_2}$</li>\n<li>$H(\\epsilon)\\eta=\\mathrm{id}_{\\mathcal G_1}$</li>\n</ol>\n\n<p>By multiplying the naturality conditions with the inverses given by the zig-zag identity, we end up reducing all four conditions to the following two</p>\n\n<ol>\n<li>$H(\\epsilon F(g_1))\\eta=g_1$ for all $g_1\\in\\mathcal G_1$</li>\n<li>$g_2=\\epsilon F(H(g_2)\\eta))$ for all $g_2\\in\\mathcal G_2$</li>\n</ol>\n\n<p>Indeed, the zig-zag identities are the special case where $g_1$ and $g_2$ are the identity elements of the groups. In particular, in the case of a group (where left and right inverses coincide), we have $HF(g_1)=\\eta^{-1}g_1\\eta$ and $FH(g_2)=\\epsilon^{-1}g_2\\epsilon$ where $\\eta^{-1}=H(\\epsilon)$ and $\\epsilon^{-1}=F(\\eta)$. 
Thus an adjunction between two groups consists of a pair of group homomorphisms $F\\colon\\mathcal G_1\\rightleftarrows\\mathcal G_2\\colon H$ whose composites are inner automorphisms, so that $F$ sends the inner automorphism of $\\mathcal G_1$ to the inverse inner automorphism of $\\mathcal G_2$ and reversely for $H$.</p>\n" }, { "answer_id": 2096572, "score": 4, "text": "<p>Every natural transformation between functors between groups is an equivalence, so every adjunction is an equivalence. </p>\n" } ]
Question: What can be said about adjunctions between groups (regarded as one-object categories)? Answer 1: <p>Since groups as one-object categories are a special case of <em>groupoids</em>, that is, categories in which every morphism is an isomorphism, for any adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ between two group(oid)s, the unit and counit natural transformations $\mathrm{id}_{\mathcal G_1}\overset{\eta}\Rightarrow HF$ and $FH\overset\epsilon\Rightarrow\mathrm{id}_{\mathcal G_2}$ will be natural isomorphisms. In particular, all adjoint pairs of functors between two group(oid)s are adjoint <em>equivalences</em>.</p> <p>In the special case of an adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ of one-object categories (i.e. monoids), we can identify $\eta$ and $\epsilon$ with elements of $\mathcal G_1$ and $\mathcal G_2$ respectively, which must satisfy the naturality conditions</p> <ol> <li>$HF(g_1)\eta=\eta g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2\epsilon=\epsilon FH(g_2)$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Furthermore, the zig-zag identities which make the unit and counit into an adjunction between the two functors (i.e. monoid homomorphisms) are</p> <ol> <li>$\epsilon F(\eta)=\mathrm{id}_{\mathcal G_2}$</li> <li>$H(\epsilon)\eta=\mathrm{id}_{\mathcal G_1}$</li> </ol> <p>By multiplying the naturality conditions with the inverses given by the zig-zag identities, we end up reducing all four conditions to the following two:</p> <ol> <li>$H(\epsilon F(g_1))\eta=g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2=\epsilon F(H(g_2)\eta)$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Indeed, the zig-zag identities are the special case where $g_1$ and $g_2$ are the identity elements of the groups. In particular, in the case of a group (where left and right inverses coincide), we have $HF(g_1)=\eta^{-1}g_1\eta$ and $FH(g_2)=\epsilon^{-1}g_2\epsilon$, where $\eta^{-1}=H(\epsilon)$ and $\epsilon^{-1}=F(\eta)$. 
Thus an adjunction between two groups consists of a pair of group homomorphisms $F\colon\mathcal G_1\rightleftarrows\mathcal G_2\colon H$ whose composites are inner automorphisms, so that $F$ sends the inner automorphism of $\mathcal G_1$ to the inverse inner automorphism of $\mathcal G_2$ and reversely for $H$.</p> Answer 2: <p>Every natural transformation between functors between groups is an equivalence, so every adjunction is an equivalence. </p>
monoid homomorphisms) are</p> <ol> <li>$\epsilon F(\eta)=\mathrm{id}_{\mathcal G_2}$</li> <li>$H(\epsilon)\eta=\mathrm{id}_{\mathcal G_1}$</li> </ol> <p>By multiplying the naturality conditions with the inverses given by the zig-zag identity, we end up reducing all four conditions to the following two</p> <ol> <li>$H(\epsilon F(g_1))\eta=g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2=\epsilon F(H(g_2)\eta))$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Indeed, the zig-zag identities are the special case where $g_1$ and $g_2$ are the identity elements of the groups. In particular, in the case of a group (where left and right inverses coincide), we have $HF(g_1)=\eta^{-1}g_1\eta$ and $FH(g_2)=\epsilon^{-1}g_2\epsilon$ where $\eta^{-1}=H(\epsilon)$ and $\epsilon^{-1}=F(\eta)$. Thus an adjunction between two groups consists of a pair of group homomorphisms $F\colon\mathcal G_1\rightleftarrows\mathcal G_2\colon H$ whose composites are inner automorphisms so that $F$ sends the inner automorphism of $\mathcal G_1$ to the inverse inner automorphism of $\mathcal G_2$ and reversly for $H$.</p> Answer 10: <p>Every natural transformation between functors between groups is an equivalence, so every adjunction is an equivalence. </p> Answer 11: <p>Since groups as one-object categories are a special case of <em>groupoids</em>, that is, categories in which every morphism is an isomorphism, for any adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ between two group(oid)s, the unit and counit natural transformations $\mathrm{id}_{\mathcal G_1}\overset{\eta}\Rightarrow HF$ and $FH\overset\epsilon\Rightarrow\mathrm{id}_{\mathcal G_2}$ will be natural isomorphisms. In particular, all adjoint pairs of functors between two group(oid)s are adjoint <em>equivalences</em>.</p> <p>In the special case of the adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ of one-object categories (i.e. 
monoids), we can identify $\eta$ and $\epsilon$ with elements of $\mathcal G_1$ and $\mathcal G_2$ respectively, which must satisfy the naturality conditions</p> <ol> <li>$HF(g_1)\eta=\eta g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2\epsilon=\epsilon FH(g_2)$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Furthermore, the zig-zag identities which makes the unit and counit into an adjunction between the two functors (i.e. monoid homomorphisms) are</p> <ol> <li>$\epsilon F(\eta)=\mathrm{id}_{\mathcal G_2}$</li> <li>$H(\epsilon)\eta=\mathrm{id}_{\mathcal G_1}$</li> </ol> <p>By multiplying the naturality conditions with the inverses given by the zig-zag identity, we end up reducing all four conditions to the following two</p> <ol> <li>$H(\epsilon F(g_1))\eta=g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2=\epsilon F(H(g_2)\eta))$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Indeed, the zig-zag identities are the special case where $g_1$ and $g_2$ are the identity elements of the groups. In particular, in the case of a group (where left and right inverses coincide), we have $HF(g_1)=\eta^{-1}g_1\eta$ and $FH(g_2)=\epsilon^{-1}g_2\epsilon$ where $\eta^{-1}=H(\epsilon)$ and $\epsilon^{-1}=F(\eta)$. Thus an adjunction between two groups consists of a pair of group homomorphisms $F\colon\mathcal G_1\rightleftarrows\mathcal G_2\colon H$ whose composites are inner automorphisms so that $F$ sends the inner automorphism of $\mathcal G_1$ to the inverse inner automorphism of $\mathcal G_2$ and reversly for $H$.</p> Answer 12: <p>Every natural transformation between functors between groups is an equivalence, so every adjunction is an equivalence. 
</p> Answer 13: <p>Since groups as one-object categories are a special case of <em>groupoids</em>, that is, categories in which every morphism is an isomorphism, for any adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ between two group(oid)s, the unit and counit natural transformations $\mathrm{id}_{\mathcal G_1}\overset{\eta}\Rightarrow HF$ and $FH\overset\epsilon\Rightarrow\mathrm{id}_{\mathcal G_2}$ will be natural isomorphisms. In particular, all adjoint pairs of functors between two group(oid)s are adjoint <em>equivalences</em>.</p> <p>In the special case of the adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ of one-object categories (i.e. monoids), we can identify $\eta$ and $\epsilon$ with elements of $\mathcal G_1$ and $\mathcal G_2$ respectively, which must satisfy the naturality conditions</p> <ol> <li>$HF(g_1)\eta=\eta g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2\epsilon=\epsilon FH(g_2)$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Furthermore, the zig-zag identities which makes the unit and counit into an adjunction between the two functors (i.e. monoid homomorphisms) are</p> <ol> <li>$\epsilon F(\eta)=\mathrm{id}_{\mathcal G_2}$</li> <li>$H(\epsilon)\eta=\mathrm{id}_{\mathcal G_1}$</li> </ol> <p>By multiplying the naturality conditions with the inverses given by the zig-zag identity, we end up reducing all four conditions to the following two</p> <ol> <li>$H(\epsilon F(g_1))\eta=g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2=\epsilon F(H(g_2)\eta))$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Indeed, the zig-zag identities are the special case where $g_1$ and $g_2$ are the identity elements of the groups. In particular, in the case of a group (where left and right inverses coincide), we have $HF(g_1)=\eta^{-1}g_1\eta$ and $FH(g_2)=\epsilon^{-1}g_2\epsilon$ where $\eta^{-1}=H(\epsilon)$ and $\epsilon^{-1}=F(\eta)$. 
Thus an adjunction between two groups consists of a pair of group homomorphisms $F\colon\mathcal G_1\rightleftarrows\mathcal G_2\colon H$ whose composites are inner automorphisms so that $F$ sends the inner automorphism of $\mathcal G_1$ to the inverse inner automorphism of $\mathcal G_2$ and reversly for $H$.</p> Answer 14: <p>Every natural transformation between functors between groups is an equivalence, so every adjunction is an equivalence. </p> Answer 15: <p>Since groups as one-object categories are a special case of <em>groupoids</em>, that is, categories in which every morphism is an isomorphism, for any adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ between two group(oid)s, the unit and counit natural transformations $\mathrm{id}_{\mathcal G_1}\overset{\eta}\Rightarrow HF$ and $FH\overset\epsilon\Rightarrow\mathrm{id}_{\mathcal G_2}$ will be natural isomorphisms. In particular, all adjoint pairs of functors between two group(oid)s are adjoint <em>equivalences</em>.</p> <p>In the special case of the adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ of one-object categories (i.e. monoids), we can identify $\eta$ and $\epsilon$ with elements of $\mathcal G_1$ and $\mathcal G_2$ respectively, which must satisfy the naturality conditions</p> <ol> <li>$HF(g_1)\eta=\eta g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2\epsilon=\epsilon FH(g_2)$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Furthermore, the zig-zag identities which makes the unit and counit into an adjunction between the two functors (i.e. 
monoid homomorphisms) are</p> <ol> <li>$\epsilon F(\eta)=\mathrm{id}_{\mathcal G_2}$</li> <li>$H(\epsilon)\eta=\mathrm{id}_{\mathcal G_1}$</li> </ol> <p>By multiplying the naturality conditions with the inverses given by the zig-zag identity, we end up reducing all four conditions to the following two</p> <ol> <li>$H(\epsilon F(g_1))\eta=g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2=\epsilon F(H(g_2)\eta))$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Indeed, the zig-zag identities are the special case where $g_1$ and $g_2$ are the identity elements of the groups. In particular, in the case of a group (where left and right inverses coincide), we have $HF(g_1)=\eta^{-1}g_1\eta$ and $FH(g_2)=\epsilon^{-1}g_2\epsilon$ where $\eta^{-1}=H(\epsilon)$ and $\epsilon^{-1}=F(\eta)$. Thus an adjunction between two groups consists of a pair of group homomorphisms $F\colon\mathcal G_1\rightleftarrows\mathcal G_2\colon H$ whose composites are inner automorphisms so that $F$ sends the inner automorphism of $\mathcal G_1$ to the inverse inner automorphism of $\mathcal G_2$ and reversly for $H$.</p> Answer 16: <p>Every natural transformation between functors between groups is an equivalence, so every adjunction is an equivalence. </p> Answer 17: <p>Since groups as one-object categories are a special case of <em>groupoids</em>, that is, categories in which every morphism is an isomorphism, for any adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ between two group(oid)s, the unit and counit natural transformations $\mathrm{id}_{\mathcal G_1}\overset{\eta}\Rightarrow HF$ and $FH\overset\epsilon\Rightarrow\mathrm{id}_{\mathcal G_2}$ will be natural isomorphisms. In particular, all adjoint pairs of functors between two group(oid)s are adjoint <em>equivalences</em>.</p> <p>In the special case of the adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ of one-object categories (i.e. 
monoids), we can identify $\eta$ and $\epsilon$ with elements of $\mathcal G_1$ and $\mathcal G_2$ respectively, which must satisfy the naturality conditions</p> <ol> <li>$HF(g_1)\eta=\eta g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2\epsilon=\epsilon FH(g_2)$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Furthermore, the zig-zag identities which makes the unit and counit into an adjunction between the two functors (i.e. monoid homomorphisms) are</p> <ol> <li>$\epsilon F(\eta)=\mathrm{id}_{\mathcal G_2}$</li> <li>$H(\epsilon)\eta=\mathrm{id}_{\mathcal G_1}$</li> </ol> <p>By multiplying the naturality conditions with the inverses given by the zig-zag identity, we end up reducing all four conditions to the following two</p> <ol> <li>$H(\epsilon F(g_1))\eta=g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2=\epsilon F(H(g_2)\eta))$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Indeed, the zig-zag identities are the special case where $g_1$ and $g_2$ are the identity elements of the groups. In particular, in the case of a group (where left and right inverses coincide), we have $HF(g_1)=\eta^{-1}g_1\eta$ and $FH(g_2)=\epsilon^{-1}g_2\epsilon$ where $\eta^{-1}=H(\epsilon)$ and $\epsilon^{-1}=F(\eta)$. Thus an adjunction between two groups consists of a pair of group homomorphisms $F\colon\mathcal G_1\rightleftarrows\mathcal G_2\colon H$ whose composites are inner automorphisms so that $F$ sends the inner automorphism of $\mathcal G_1$ to the inverse inner automorphism of $\mathcal G_2$ and reversly for $H$.</p> Answer 18: <p>Every natural transformation between functors between groups is an equivalence, so every adjunction is an equivalence. 
</p> Answer 19: <p>Since groups as one-object categories are a special case of <em>groupoids</em>, that is, categories in which every morphism is an isomorphism, for any adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ between two group(oid)s, the unit and counit natural transformations $\mathrm{id}_{\mathcal G_1}\overset{\eta}\Rightarrow HF$ and $FH\overset\epsilon\Rightarrow\mathrm{id}_{\mathcal G_2}$ will be natural isomorphisms. In particular, all adjoint pairs of functors between two group(oid)s are adjoint <em>equivalences</em>.</p> <p>In the special case of the adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ of one-object categories (i.e. monoids), we can identify $\eta$ and $\epsilon$ with elements of $\mathcal G_1$ and $\mathcal G_2$ respectively, which must satisfy the naturality conditions</p> <ol> <li>$HF(g_1)\eta=\eta g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2\epsilon=\epsilon FH(g_2)$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Furthermore, the zig-zag identities which makes the unit and counit into an adjunction between the two functors (i.e. monoid homomorphisms) are</p> <ol> <li>$\epsilon F(\eta)=\mathrm{id}_{\mathcal G_2}$</li> <li>$H(\epsilon)\eta=\mathrm{id}_{\mathcal G_1}$</li> </ol> <p>By multiplying the naturality conditions with the inverses given by the zig-zag identity, we end up reducing all four conditions to the following two</p> <ol> <li>$H(\epsilon F(g_1))\eta=g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2=\epsilon F(H(g_2)\eta))$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Indeed, the zig-zag identities are the special case where $g_1$ and $g_2$ are the identity elements of the groups. In particular, in the case of a group (where left and right inverses coincide), we have $HF(g_1)=\eta^{-1}g_1\eta$ and $FH(g_2)=\epsilon^{-1}g_2\epsilon$ where $\eta^{-1}=H(\epsilon)$ and $\epsilon^{-1}=F(\eta)$. 
Thus an adjunction between two groups consists of a pair of group homomorphisms $F\colon\mathcal G_1\rightleftarrows\mathcal G_2\colon H$ whose composites are inner automorphisms so that $F$ sends the inner automorphism of $\mathcal G_1$ to the inverse inner automorphism of $\mathcal G_2$ and reversly for $H$.</p> Answer 20: <p>Every natural transformation between functors between groups is an equivalence, so every adjunction is an equivalence. </p> Answer 21: <p>Since groups as one-object categories are a special case of <em>groupoids</em>, that is, categories in which every morphism is an isomorphism, for any adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ between two group(oid)s, the unit and counit natural transformations $\mathrm{id}_{\mathcal G_1}\overset{\eta}\Rightarrow HF$ and $FH\overset\epsilon\Rightarrow\mathrm{id}_{\mathcal G_2}$ will be natural isomorphisms. In particular, all adjoint pairs of functors between two group(oid)s are adjoint <em>equivalences</em>.</p> <p>In the special case of the adjunction $F\dashv H\colon\mathcal G_1\to\mathcal G_2$ of one-object categories (i.e. monoids), we can identify $\eta$ and $\epsilon$ with elements of $\mathcal G_1$ and $\mathcal G_2$ respectively, which must satisfy the naturality conditions</p> <ol> <li>$HF(g_1)\eta=\eta g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2\epsilon=\epsilon FH(g_2)$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Furthermore, the zig-zag identities which makes the unit and counit into an adjunction between the two functors (i.e. 
monoid homomorphisms) are</p> <ol> <li>$\epsilon F(\eta)=\mathrm{id}_{\mathcal G_2}$</li> <li>$H(\epsilon)\eta=\mathrm{id}_{\mathcal G_1}$</li> </ol> <p>By multiplying the naturality conditions with the inverses given by the zig-zag identity, we end up reducing all four conditions to the following two</p> <ol> <li>$H(\epsilon F(g_1))\eta=g_1$ for all $g_1\in\mathcal G_1$</li> <li>$g_2=\epsilon F(H(g_2)\eta))$ for all $g_2\in\mathcal G_2$</li> </ol> <p>Indeed, the zig-zag identities are the special case where $g_1$ and $g_2$ are the identity elements of the groups. In particular, in the case of a group (where left and right inverses coincide), we have $HF(g_1)=\eta^{-1}g_1\eta$ and $FH(g_2)=\epsilon^{-1}g_2\epsilon$ where $\eta^{-1}=H(\epsilon)$ and $\epsilon^{-1}=F(\eta)$. Thus an adjunction between two groups consists of a pair of group homomorphisms $F\colon\mathcal G_1\rightleftarrows\mathcal G_2\colon H$ whose composites are inner automorphisms so that $F$ sends the inner automorphism of $\mathcal G_1$ to the inverse inner automorphism of $\mathcal G_2$ and reversly for $H$.</p> Answer 22: <p>Every natural transformation between functors between groups is an equivalence, so every adjunction is an equivalence. </p>
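The reduction of the four conditions to the two combined ones, claimed in the longer answer above, can be checked in one line each (a sketch; it uses only that $F$ and $H$ are monoid homomorphisms together with the displayed naturality and zig-zag identities):

```latex
% Combined condition 1: functoriality of H, naturality (1), zig-zag (2)
H(\epsilon F(g_1))\,\eta = H(\epsilon)\,HF(g_1)\,\eta
                         = H(\epsilon)\,\eta\,g_1 = g_1
% Combined condition 2: functoriality of F, naturality (2), zig-zag (1)
\epsilon\,F(H(g_2)\eta) = \epsilon\,FH(g_2)\,F(\eta)
                        = g_2\,\epsilon\,F(\eta) = g_2
```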
A &quot;functional&quot; definition of simple group?
2,097,063
6
https://math.stackexchange.com/questions/2097063/a-functional-definition-of-simple-group
<p>We usually give the definition of simple group as follows,</p> <blockquote> <p><strong>Simple Group.</strong> A group <span class="math-container">$(G,\circ)$</span> is said to be simple if it contains no <em>proper nontrivial normal subgroup of <span class="math-container">$G$</span></em>.</p> </blockquote> <p>Where by trivial normal subgroup of a group <span class="math-container">$G$</span> we mean <span class="math-container">$\langle e\rangle$</span> where <span class="math-container">$e$</span> is the identity element of <span class="math-container">$G$</span>.</p> <p>Also, we may give the definition of a connected topological space as follows,</p> <blockquote> <p><strong>Connected Topological Space.</strong> A topological space <span class="math-container">$(X,\mathfrak{T})$</span> is said to be connected if it contains no <em>proper nontrivial clopen subset of <span class="math-container">$X$</span></em>.</p> </blockquote> <p>I couldn't help but notice the similarity between these two definitions (especially the italicized parts of the definitions) and so I tried to formulate the definition of Simple Groups in an analogous manner of the following definition of Connected Topological Spaces,</p> <blockquote> <p>A topological space <span class="math-container">$(X,\mathfrak{T})$</span> is said to be connected if for all continuous function <span class="math-container">$f:X\to\{0,1\}$</span>, it is constant.</p> </blockquote> <p>A &quot;natural&quot; analogue of this definition in case of simple groups can be,</p> <blockquote> <p><strong>Definition.</strong> A group <span class="math-container">$(G,\circ)$</span> is said to be simple if for all homomorphisms <span class="math-container">$f:G\to\mathbb{Z}_2$</span>, it is constant.</p> </blockquote> <p>However, I can't prove (or disprove) whether the above definition is equivalent to the definition of simple groups that I mentioned previously.</p> <p><strong>Questions</strong></p> <blockquote> <ul> <li><p>Can 
anyone help me with this?</p> </li> <li><p>If the definition is not equivalent, can some &quot;functional&quot; definition of a simple group be given?</p> </li> </ul> </blockquote>
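The proposed definition can be probed by brute force for small cyclic groups. A minimal sketch (assuming cyclic groups are represented additively as $\{0,\dots,n-1\}$; the helper names are illustrative, not from any library) makes a counterexample concrete: $\mathbb{Z}_9$ admits only the constant homomorphism to $\mathbb{Z}_2$, yet it is not simple.

```python
def homomorphisms(n, m):
    """All group homomorphisms Z_n -> Z_m, written additively.

    A homomorphism is determined by the image k of the generator 1,
    and k in Z_m works exactly when n*k = 0 (mod m)."""
    return [k for k in range(m) if (n * k) % m == 0]

def subgroups(n):
    """All subgroups of Z_n: one cyclic subgroup <d> for each divisor d of n.
    Z_n is abelian, so every subgroup is automatically normal."""
    return [set(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

# Z_9 -> Z_2: only the constant (zero) homomorphism exists ...
assert homomorphisms(9, 2) == [0]

# ... yet Z_9 is not simple: it has a proper nontrivial normal subgroup.
proper_nontrivial = [H for H in subgroups(9) if 1 < len(H) < 9]
assert proper_nontrivial == [{0, 3, 6}]
```

Note also that `homomorphisms(2, 2) == [0, 1]`: the identity map $\mathbb{Z}_2\to\mathbb{Z}_2$ is a nonconstant homomorphism even though $\mathbb{Z}_2$ is simple, so the proposed definition fails in both directions.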
[ { "answer_id": 2097067, "score": 6, "text": "<p>That definition is not correct. For instance, the group $\\mathbb{Z}_2$ is simple, but the identity homomorphism $\\mathbb{Z}_2\\to\\mathbb{Z}_2$ is nonconstant. Or if $G$ is any finite group of odd order, any homomorphism $G\\to\\mathbb{Z}_2$ is constant, but $G$ need not be simple. The problem is that in contrast with clopen subsets of a topological space, you can't just take a normal subgroup $K\\subset G$ and get a homomorphism $G\\to\\mathbb{Z}_2$ by mapping $K$ to $0$ and every other element of $G$ to $1$. That usually won't be a homomorphism.</p>\n\n<p>A correct \"functional\" definition of a simple group is that a nontrivial group $G$ is simple if any homomorphism $f:G\\to H$ from $G$ to any other group is either injective or trivial (where \"trivial\" means it sends every element to the identity). Indeed, a homomorphism is injective iff its kernel is the trivial subgroup and trivial iff its kernel is all of $G$, so this is just saying the only normal subgroups of $G$ are the trivial subgroup and $G$.</p>\n\n<p>(Aside: I require $G$ to be nontrivial in this definition because the trivial group is not simple; similarly, the empty topological space is not connected (see <a href=\"https://ncatlab.org/nlab/show/too+simple+to+be+simple\" rel=\"noreferrer\">https://ncatlab.org/nlab/show/too+simple+to+be+simple</a>). You can avoid stating that $G$ is nontrivial by saying instead that every homomorphism $f:G\\to H$ is exactly one of injective and trivial. Similarly, you can fix the definition of connectedness by saying every continuous $f:X\\to\\{0,1\\}$ has exactly one point in its image.)</p>\n" }, { "answer_id": 2097065, "score": 1, "text": "<p>There is only one homomorphism of the cyclic group of order $9$ into the group of order $2$ (the zero homomorphism) but that group is not simple.</p>\n" }, { "answer_id": 2097067, "score": 6, "text": "<p>That definition is not correct. 
For instance, the group $\\mathbb{Z}_2$ is simple, but the identity homomorphism $\\mathbb{Z}_2\\to\\mathbb{Z}_2$ is nonconstant. Or if $G$ is any finite group of odd order, any homomorphism $G\\to\\mathbb{Z}_2$ is constant, but $G$ need not be simple. The problem is that in contrast with clopen subsets of a topological space, you can't just take a normal subgroup $K\\subset G$ and get a homomorphism $G\\to\\mathbb{Z}_2$ by mapping $K$ to $0$ and every other element of $G$ to $1$. That usually won't be a homomorphism.</p>\n\n<p>A correct \"functional\" definition of a simple group is that a nontrivial group $G$ is simple if any homomorphism $f:G\\to H$ from $G$ to any other group is either injective or trivial (where \"trivial\" means it sends every element to the identity). Indeed, a homomorphism is injective iff its kernel is the trivial subgroup and trivial iff its kernel is all of $G$, so this is just saying the only normal subgroups of $G$ are the trivial subgroup and $G$.</p>\n\n<p>(Aside: I require $G$ to be nontrivial in this definition because the trivial group is not simple; similarly, the empty topological space is not connected (see <a href=\"https://ncatlab.org/nlab/show/too+simple+to+be+simple\" rel=\"noreferrer\">https://ncatlab.org/nlab/show/too+simple+to+be+simple</a>). You can avoid stating that $G$ is nontrivial by saying instead that every homomorphism $f:G\\to H$ is exactly one of injective and trivial. Similarly, you can fix the definition of connectedness by saying every continuous $f:X\\to\\{0,1\\}$ has exactly one point in its image.)</p>\n" }, { "answer_id": 2097065, "score": 1, "text": "<p>There is only one homomorphism of the cyclic group of order $9$ into the group of order $2$ (the zero homomorphism) but that group is not simple.</p>\n" }, { "answer_id": 2097067, "score": 6, "text": "<p>That definition is not correct. 
For instance, the group $\\mathbb{Z}_2$ is simple, but the identity homomorphism $\\mathbb{Z}_2\\to\\mathbb{Z}_2$ is nonconstant. Or if $G$ is any finite group of odd order, any homomorphism $G\\to\\mathbb{Z}_2$ is constant, but $G$ need not be simple. The problem is that in contrast with clopen subsets of a topological space, you can't just take a normal subgroup $K\\subset G$ and get a homomorphism $G\\to\\mathbb{Z}_2$ by mapping $K$ to $0$ and every other element of $G$ to $1$. That usually won't be a homomorphism.</p>\n\n<p>A correct \"functional\" definition of a simple group is that a nontrivial group $G$ is simple if any homomorphism $f:G\\to H$ from $G$ to any other group is either injective or trivial (where \"trivial\" means it sends every element to the identity). Indeed, a homomorphism is injective iff its kernel is the trivial subgroup and trivial iff its kernel is all of $G$, so this is just saying the only normal subgroups of $G$ are the trivial subgroup and $G$.</p>\n\n<p>(Aside: I require $G$ to be nontrivial in this definition because the trivial group is not simple; similarly, the empty topological space is not connected (see <a href=\"https://ncatlab.org/nlab/show/too+simple+to+be+simple\" rel=\"noreferrer\">https://ncatlab.org/nlab/show/too+simple+to+be+simple</a>). You can avoid stating that $G$ is nontrivial by saying instead that every homomorphism $f:G\\to H$ is exactly one of injective and trivial. Similarly, you can fix the definition of connectedness by saying every continuous $f:X\\to\\{0,1\\}$ has exactly one point in its image.)</p>\n" }, { "answer_id": 2097065, "score": 1, "text": "<p>There is only one homomorphism of the cyclic group of order $9$ into the group of order $2$ (the zero homomorphism) but that group is not simple.</p>\n" }, { "answer_id": 2097067, "score": 6, "text": "<p>That definition is not correct. 
For instance, the group $\\mathbb{Z}_2$ is simple, but the identity homomorphism $\\mathbb{Z}_2\\to\\mathbb{Z}_2$ is nonconstant. Or if $G$ is any finite group of odd order, any homomorphism $G\\to\\mathbb{Z}_2$ is constant, but $G$ need not be simple. The problem is that in contrast with clopen subsets of a topological space, you can't just take a normal subgroup $K\\subset G$ and get a homomorphism $G\\to\\mathbb{Z}_2$ by mapping $K$ to $0$ and every other element of $G$ to $1$. That usually won't be a homomorphism.</p>\n\n<p>A correct \"functional\" definition of a simple group is that a nontrivial group $G$ is simple if any homomorphism $f:G\\to H$ from $G$ to any other group is either injective or trivial (where \"trivial\" means it sends every element to the identity). Indeed, a homomorphism is injective iff its kernel is the trivial subgroup and trivial iff its kernel is all of $G$, so this is just saying the only normal subgroups of $G$ are the trivial subgroup and $G$.</p>\n\n<p>(Aside: I require $G$ to be nontrivial in this definition because the trivial group is not simple; similarly, the empty topological space is not connected (see <a href=\"https://ncatlab.org/nlab/show/too+simple+to+be+simple\" rel=\"noreferrer\">https://ncatlab.org/nlab/show/too+simple+to+be+simple</a>). You can avoid stating that $G$ is nontrivial by saying instead that every homomorphism $f:G\\to H$ is exactly one of injective and trivial. Similarly, you can fix the definition of connectedness by saying every continuous $f:X\\to\\{0,1\\}$ has exactly one point in its image.)</p>\n" }, { "answer_id": 2097065, "score": 1, "text": "<p>There is only one homomorphism of the cyclic group of order $9$ into the group of order $2$ (the zero homomorphism) but that group is not simple.</p>\n" } ]
Question: A &quot;functional&quot; definition of simple group? Answer 1: <p>That definition is not correct. For instance, the group $\mathbb{Z}_2$ is simple, but the identity homomorphism $\mathbb{Z}_2\to\mathbb{Z}_2$ is nonconstant. Or if $G$ is any finite group of odd order, any homomorphism $G\to\mathbb{Z}_2$ is constant, but $G$ need not be simple. The problem is that in contrast with clopen subsets of a topological space, you can't just take a normal subgroup $K\subset G$ and get a homomorphism $G\to\mathbb{Z}_2$ by mapping $K$ to $0$ and every other element of $G$ to $1$. That usually won't be a homomorphism.</p> <p>A correct "functional" definition of a simple group is that a nontrivial group $G$ is simple if any homomorphism $f:G\to H$ from $G$ to any other group is either injective or trivial (where "trivial" means it sends every element to the identity). Indeed, a homomorphism is injective iff its kernel is the trivial subgroup and trivial iff its kernel is all of $G$, so this is just saying the only normal subgroups of $G$ are the trivial subgroup and $G$.</p> <p>(Aside: I require $G$ to be nontrivial in this definition because the trivial group is not simple; similarly, the empty topological space is not connected (see <a href="https://ncatlab.org/nlab/show/too+simple+to+be+simple" rel="noreferrer">https://ncatlab.org/nlab/show/too+simple+to+be+simple</a>). You can avoid stating that $G$ is nontrivial by saying instead that every homomorphism $f:G\to H$ is exactly one of injective and trivial. Similarly, you can fix the definition of connectedness by saying every continuous $f:X\to\{0,1\}$ has exactly one point in its image.)</p> Answer 2: <p>There is only one homomorphism of the cyclic group of order $9$ into the group of order $2$ (the zero homomorphism) but that group is not simple.</p>
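Both claims in the answers above — that the only homomorphism $\mathbb{Z}_9\to\mathbb{Z}_2$ is the zero map (even though $\mathbb{Z}_9$ is not simple), and that the "indicator" map of a normal subgroup $K$ is usually not a homomorphism — can be verified by brute force. The following sketch is illustrative only (it is not part of the original posts); it represents $\mathbb{Z}_n$ as the integers $0,\dots,n-1$ under addition mod $n$:

```python
from itertools import product

def is_hom(f, n, m):
    """Check whether f: Z_n -> Z_m (given as a dict a -> f(a)) is a group homomorphism."""
    return all(f[(a + b) % n] == (f[a] + f[b]) % m
               for a in range(n) for b in range(n))

# Brute-force all 2^9 maps Z_9 -> Z_2: only the zero homomorphism survives,
# yet Z_9 is not simple (it has the proper nontrivial normal subgroup {0, 3, 6}).
homs = [f for values in product(range(2), repeat=9)
        if is_hom(f := dict(enumerate(values)), 9, 2)]
print(len(homs))        # 1 (the zero map)

# The indicator map of K = {0, 3, 6}: send K to 0 and everything else to 1.
# It is not a homomorphism: f(1) + f(1) = 0 in Z_2, but f(1 + 1) = f(2) = 1.
K = {0, 3, 6}
f = {a: (0 if a in K else 1) for a in range(9)}
print(is_hom(f, 9, 2))  # False
```

The quotient map $\mathbb{Z}_9\to\mathbb{Z}_9/K\cong\mathbb{Z}_3$ is of course a homomorphism killing $K$; the point is that its codomain is $\mathbb{Z}_3$, not $\mathbb{Z}_2$, which is exactly the asymmetry with clopen sets that Answer 1 describes.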
Book on Lipschitz continuity
2,097,074
6
https://math.stackexchange.com/questions/2097074/book-on-lipschitz-continuity
<p>I have managed to end up in a fourth-year differential equations class without ever needing to take analysis (really think they messed the pre-reqs up). I know the notions of bounded, closed, compact, and convergent sequences, continuity and even uniform continuity. But give me a ball of radius $r$ at $x$ and all I know how to do with it is play soccer.</p> <p>I would like a good book that lays out different types of continuity and proves their relationship, preferably also with examples being applied to actual functions on the reals. (Turns out there is like 6 different types of continuity. And I really don’t want to drop this class if I can help it.)</p>
[ { "answer_id": 2097121, "score": 7, "text": "<p>Re: <em>Lipschitz</em> continuity, I suggest you look at Fig. 29 in <a href=\"https://books.google.fr/books?id=JUoyqlW7PZgC&amp;lpg=PP1&amp;pg=PA38#v=onepage&amp;q&amp;f=false\" rel=\"noreferrer\">Arnold (1992)</a> and the discussion around it. Myself, I think about the Lipschitz property as more of a weakened smoothness condition than a strengthened continuity one; the symbol $C^{1-}$ sometimes used for Lipschitz-continuous functions is quite apt. (Note that $C^0 \\supset C^{1-} \\supset C^1$; about the $C^k$ notation, see <a href=\"http://mathworld.wolfram.com/C-kFunction.html\" rel=\"noreferrer\">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/Qz26I.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/Qz26I.png\" alt=\"Image showing that a Lipschitz-continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”.\"></a></p>\n\n<p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p>\n\n<ul>\n<li>Continuity, I expect you have some understanding of.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Uniform_continuity\" rel=\"noreferrer\">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href=\"https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem\" rel=\"noreferrer\">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Absolute_continuity\" rel=\"noreferrer\">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Lipschitz_continuity\" rel=\"noreferrer\">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href=\"https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem\" rel=\"noreferrer\">Picard-Lindelöf theorem</a>.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/H%C3%B6lder_condition\" rel=\"noreferrer\">Hölder continuity</a> is a refinement of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Equicontinuity\" rel=\"noreferrer\">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href=\"https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem\" rel=\"noreferrer\">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li>\n</ul>\n\n<p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though), the excercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p>\n\n<p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there is not a lot of theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend to track down a translation of the <em>first</em> Russian edition, because it is much more concise if sparser on applications.)</p>\n" }, { "answer_id": 2097087, "score": 1, "text": "<p>I think this link has a good introduction on Lipschitz continuity </p>\n\n<p><a href=\"http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf\" rel=\"nofollow noreferrer\">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p>\n\n<p>And for Real Analysis, iI think book by Royden , by S.Kumaresan is really good and student friendly it encourages you to think.</p>\n" }, { "answer_id": 2097121, "score": 7, "text": "<p>Re: <em>Lipschitz</em> continuity, I suggest you look at Fig. 29 in <a href=\"https://books.google.fr/books?id=JUoyqlW7PZgC&amp;lpg=PP1&amp;pg=PA38#v=onepage&amp;q&amp;f=false\" rel=\"noreferrer\">Arnold (1992)</a> and the discussion around it. Myself, I think about the Lipschitz property as more of a weakened smoothness condition than a strengthened continuity one; the symbol $C^{1-}$ sometimes used for Lipschitz-continuous functions is quite apt. 
(Note that $C^0 \\supset C^{1-} \\supset C^1$; about the $C^k$ notation, see <a href=\"http://mathworld.wolfram.com/C-kFunction.html\" rel=\"noreferrer\">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/Qz26I.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/Qz26I.png\" alt=\"Image showing that Lipshitz continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”.\"></a></p>\n\n<p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p>\n\n<ul>\n<li>Continuity, I expect you have some understanding of.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Uniform_continuity\" rel=\"noreferrer\">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href=\"https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem\" rel=\"noreferrer\">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Absolute_continuity\" rel=\"noreferrer\">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Lipschitz_continuity\" rel=\"noreferrer\">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href=\"https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem\" rel=\"noreferrer\">Picard-Lindelöf theorem</a>.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/H%C3%B6lder_condition\" rel=\"noreferrer\">Hölder continuity</a> is a refinement of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Equicontinuity\" rel=\"noreferrer\">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href=\"https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem\" rel=\"noreferrer\">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li>\n</ul>\n\n<p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though), the excercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p>\n\n<p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there is not a lot of theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend to track down a translation of the <em>first</em> Russian edition, because it is much more concise if sparser on applications.)</p>\n" }, { "answer_id": 2097087, "score": 1, "text": "<p>I think this link has a good introduction on Lipschitz continuity </p>\n\n<p><a href=\"http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf\" rel=\"nofollow noreferrer\">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p>\n\n<p>And for Real Analysis, iI think book by Royden , by S.Kumaresan is really good and student friendly it encourages you to think.</p>\n" }, { "answer_id": 2097121, "score": 7, "text": "<p>Re: <em>Lipschitz</em> continuity, I suggest you look at Fig. 29 in <a href=\"https://books.google.fr/books?id=JUoyqlW7PZgC&amp;lpg=PP1&amp;pg=PA38#v=onepage&amp;q&amp;f=false\" rel=\"noreferrer\">Arnold (1992)</a> and the discussion around it. Myself, I think about the Lipschitz property as more of a weakened smoothness condition than a strengthened continuity one; the symbol $C^{1-}$ sometimes used for Lipschitz-continuous functions is quite apt. 
(Note that $C^0 \\supset C^{1-} \\supset C^1$; about the $C^k$ notation, see <a href=\"http://mathworld.wolfram.com/C-kFunction.html\" rel=\"noreferrer\">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/Qz26I.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/Qz26I.png\" alt=\"Image showing that Lipshitz continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”.\"></a></p>\n\n<p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p>\n\n<ul>\n<li>Continuity, I expect you have some understanding of.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Uniform_continuity\" rel=\"noreferrer\">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href=\"https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem\" rel=\"noreferrer\">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Absolute_continuity\" rel=\"noreferrer\">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Lipschitz_continuity\" rel=\"noreferrer\">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href=\"https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem\" rel=\"noreferrer\">Picard-Lindelöf theorem</a>.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/H%C3%B6lder_condition\" rel=\"noreferrer\">Hölder continuity</a> is a refinement of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Equicontinuity\" rel=\"noreferrer\">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href=\"https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem\" rel=\"noreferrer\">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li>\n</ul>\n\n<p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though), the excercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p>\n\n<p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there is not a lot of theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend to track down a translation of the <em>first</em> Russian edition, because it is much more concise if sparser on applications.)</p>\n" }, { "answer_id": 2097087, "score": 1, "text": "<p>I think this link has a good introduction on Lipschitz continuity </p>\n\n<p><a href=\"http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf\" rel=\"nofollow noreferrer\">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p>\n\n<p>And for Real Analysis, iI think book by Royden , by S.Kumaresan is really good and student friendly it encourages you to think.</p>\n" }, { "answer_id": 2097121, "score": 7, "text": "<p>Re: <em>Lipschitz</em> continuity, I suggest you look at Fig. 29 in <a href=\"https://books.google.fr/books?id=JUoyqlW7PZgC&amp;lpg=PP1&amp;pg=PA38#v=onepage&amp;q&amp;f=false\" rel=\"noreferrer\">Arnold (1992)</a> and the discussion around it. Myself, I think about the Lipschitz property as more of a weakened smoothness condition than a strengthened continuity one; the symbol $C^{1-}$ sometimes used for Lipschitz-continuous functions is quite apt. 
(Note that $C^0 \\supset C^{1-} \\supset C^1$; about the $C^k$ notation, see <a href=\"http://mathworld.wolfram.com/C-kFunction.html\" rel=\"noreferrer\">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/Qz26I.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/Qz26I.png\" alt=\"Image showing that Lipshitz continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”.\"></a></p>\n\n<p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p>\n\n<ul>\n<li>Continuity, I expect you have some understanding of.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Uniform_continuity\" rel=\"noreferrer\">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href=\"https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem\" rel=\"noreferrer\">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Absolute_continuity\" rel=\"noreferrer\">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Lipschitz_continuity\" rel=\"noreferrer\">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href=\"https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem\" rel=\"noreferrer\">Picard-Lindelöf theorem</a>.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/H%C3%B6lder_condition\" rel=\"noreferrer\">Hölder continuity</a> is a refinement of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Equicontinuity\" rel=\"noreferrer\">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href=\"https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem\" rel=\"noreferrer\">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li>\n</ul>\n\n<p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though), the excercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p>\n\n<p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there is not a lot of theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend to track down a translation of the <em>first</em> Russian edition, because it is much more concise if sparser on applications.)</p>\n" }, { "answer_id": 2097087, "score": 1, "text": "<p>I think this link has a good introduction on Lipschitz continuity </p>\n\n<p><a href=\"http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf\" rel=\"nofollow noreferrer\">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p>\n\n<p>And for Real Analysis, iI think book by Royden , by S.Kumaresan is really good and student friendly it encourages you to think.</p>\n" }, { "answer_id": 2097121, "score": 7, "text": "<p>Re: <em>Lipschitz</em> continuity, I suggest you look at Fig. 29 in <a href=\"https://books.google.fr/books?id=JUoyqlW7PZgC&amp;lpg=PP1&amp;pg=PA38#v=onepage&amp;q&amp;f=false\" rel=\"noreferrer\">Arnold (1992)</a> and the discussion around it. Myself, I think about the Lipschitz property as more of a weakened smoothness condition than a strengthened continuity one; the symbol $C^{1-}$ sometimes used for Lipschitz-continuous functions is quite apt. 
(Note that $C^0 \\supset C^{1-} \\supset C^1$; about the $C^k$ notation, see <a href=\"http://mathworld.wolfram.com/C-kFunction.html\" rel=\"noreferrer\">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/Qz26I.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/Qz26I.png\" alt=\"Image showing that Lipshitz continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”.\"></a></p>\n\n<p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p>\n\n<ul>\n<li>Continuity, I expect you have some understanding of.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Uniform_continuity\" rel=\"noreferrer\">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href=\"https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem\" rel=\"noreferrer\">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Absolute_continuity\" rel=\"noreferrer\">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Lipschitz_continuity\" rel=\"noreferrer\">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href=\"https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem\" rel=\"noreferrer\">Picard-Lindelöf theorem</a>.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/H%C3%B6lder_condition\" rel=\"noreferrer\">Hölder continuity</a> is a refinement of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Equicontinuity\" rel=\"noreferrer\">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href=\"https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem\" rel=\"noreferrer\">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li>\n</ul>\n\n<p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though), the excercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p>\n\n<p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there is not a lot of theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend to track down a translation of the <em>first</em> Russian edition, because it is much more concise if sparser on applications.)</p>\n" }, { "answer_id": 2097087, "score": 1, "text": "<p>I think this link has a good introduction on Lipschitz continuity </p>\n\n<p><a href=\"http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf\" rel=\"nofollow noreferrer\">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p>\n\n<p>And for Real Analysis, iI think book by Royden , by S.Kumaresan is really good and student friendly it encourages you to think.</p>\n" }, { "answer_id": 2097121, "score": 7, "text": "<p>Re: <em>Lipschitz</em> continuity, I suggest you look at Fig. 29 in <a href=\"https://books.google.fr/books?id=JUoyqlW7PZgC&amp;lpg=PP1&amp;pg=PA38#v=onepage&amp;q&amp;f=false\" rel=\"noreferrer\">Arnold (1992)</a> and the discussion around it. Myself, I think about the Lipschitz property as more of a weakened smoothness condition than a strengthened continuity one; the symbol $C^{1-}$ sometimes used for Lipschitz-continuous functions is quite apt. 
(Note that $C^0 \\supset C^{1-} \\supset C^1$; about the $C^k$ notation, see <a href=\"http://mathworld.wolfram.com/C-kFunction.html\" rel=\"noreferrer\">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/Qz26I.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/Qz26I.png\" alt=\"Image showing that Lipshitz continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”.\"></a></p>\n\n<p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p>\n\n<ul>\n<li>Continuity, I expect you have some understanding of.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Uniform_continuity\" rel=\"noreferrer\">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href=\"https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem\" rel=\"noreferrer\">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Absolute_continuity\" rel=\"noreferrer\">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Lipschitz_continuity\" rel=\"noreferrer\">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href=\"https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem\" rel=\"noreferrer\">Picard-Lindelöf theorem</a>.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/H%C3%B6lder_condition\" rel=\"noreferrer\">Hölder continuity</a> is a refinement of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Equicontinuity\" rel=\"noreferrer\">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href=\"https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem\" rel=\"noreferrer\">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li>\n</ul>\n\n<p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though), the excercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p>\n\n<p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there is not a lot of theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend to track down a translation of the <em>first</em> Russian edition, because it is much more concise if sparser on applications.)</p>\n" }, { "answer_id": 2097087, "score": 1, "text": "<p>I think this link has a good introduction on Lipschitz continuity </p>\n\n<p><a href=\"http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf\" rel=\"nofollow noreferrer\">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p>\n\n<p>And for Real Analysis, iI think book by Royden , by S.Kumaresan is really good and student friendly it encourages you to think.</p>\n" }, { "answer_id": 2097121, "score": 7, "text": "<p>Re: <em>Lipschitz</em> continuity, I suggest you look at Fig. 29 in <a href=\"https://books.google.fr/books?id=JUoyqlW7PZgC&amp;lpg=PP1&amp;pg=PA38#v=onepage&amp;q&amp;f=false\" rel=\"noreferrer\">Arnold (1992)</a> and the discussion around it. Myself, I think about the Lipschitz property as more of a weakened smoothness condition than a strengthened continuity one; the symbol $C^{1-}$ sometimes used for Lipschitz-continuous functions is quite apt. 
(Note that $C^0 \\supset C^{1-} \\supset C^1$; about the $C^k$ notation, see <a href=\"http://mathworld.wolfram.com/C-kFunction.html\" rel=\"noreferrer\">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/Qz26I.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/Qz26I.png\" alt=\"Image showing that Lipshitz continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”.\"></a></p>\n\n<p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p>\n\n<ul>\n<li>Continuity, I expect you have some understanding of.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Uniform_continuity\" rel=\"noreferrer\">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href=\"https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem\" rel=\"noreferrer\">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Absolute_continuity\" rel=\"noreferrer\">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Lipschitz_continuity\" rel=\"noreferrer\">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href=\"https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem\" rel=\"noreferrer\">Picard-Lindelöf theorem</a>.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/H%C3%B6lder_condition\" rel=\"noreferrer\">Hölder continuity</a> is a refinement of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li>\n<li><a href=\"https://en.wikipedia.org/wiki/Equicontinuity\" rel=\"noreferrer\">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href=\"https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem\" rel=\"noreferrer\">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li>\n</ul>\n\n<p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though), the excercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p>\n\n<p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there is not a lot of theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend to track down a translation of the <em>first</em> Russian edition, because it is much more concise if sparser on applications.)</p>\n" }, { "answer_id": 2097087, "score": 1, "text": "<p>I think this link has a good introduction on Lipschitz continuity </p>\n\n<p><a href=\"http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf\" rel=\"nofollow noreferrer\">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p>\n\n<p>And for Real Analysis, I think the books by Royden and by S. Kumaresan are really good and student-friendly; they encourage you to think.</p>\n" }\n]
Question: Book on Lipschitz continuity Answer 1: <p>Re: <em>Lipschitz</em> continuity, I suggest you look at Fig. 29 in <a href="https://books.google.fr/books?id=JUoyqlW7PZgC&amp;lpg=PP1&amp;pg=PA38#v=onepage&amp;q&amp;f=false" rel="noreferrer">Arnold (1992)</a> and the discussion around it. Myself, I think about the Lipschitz property as more of a weakened smoothness condition than a strengthened continuity one; the symbol $C^{1-}$ sometimes used for Lipschitz-continuous functions is quite apt. (Note that $C^0 \supset C^{1-} \supset C^1$; about the $C^k$ notation, see <a href="http://mathworld.wolfram.com/C-kFunction.html" rel="noreferrer">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p> <p><a href="https://i.stack.imgur.com/Qz26I.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Qz26I.png" alt="Image showing that a Lipschitz-continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”."></a></p> <p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p> <ul> <li>Continuity, I expect you have some understanding of.</li> <li><a href="https://en.wikipedia.org/wiki/Uniform_continuity" rel="noreferrer">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href="https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem" rel="noreferrer">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li> <li><a href="https://en.wikipedia.org/wiki/Absolute_continuity" rel="noreferrer">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li> <li><a href="https://en.wikipedia.org/wiki/Lipschitz_continuity" rel="noreferrer">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href="https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem" rel="noreferrer">Picard-Lindelöf theorem</a>.</li> <li><a href="https://en.wikipedia.org/wiki/H%C3%B6lder_condition" rel="noreferrer">Hölder continuity</a> is a generalization of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li> <li><a href="https://en.wikipedia.org/wiki/Equicontinuity" rel="noreferrer">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href="https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem" rel="noreferrer">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li> </ul> <p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though); the exercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p> <p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there is not a lot of theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend to track down a translation of the <em>first</em> Russian edition, because it is much more concise if sparser on applications.)</p> Answer 2: <p>I think this link has a good introduction on Lipschitz continuity</p> <p><a href="http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf" rel="nofollow noreferrer">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p> <p>And for Real Analysis, I think the books by Royden and by S. Kumaresan are really good and student-friendly; they encourage you to think.</p>
(Note that $C^0 \supset C^{1-} \supset C^1$; about the $C^k$ notation, see <a href="http://mathworld.wolfram.com/C-kFunction.html" rel="noreferrer">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p> <p><a href="https://i.stack.imgur.com/Qz26I.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Qz26I.png" alt="Image showing that Lipshitz continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”."></a></p> <p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p> <ul> <li>Continuity, I expect you have some understanding of.</li> <li><a href="https://en.wikipedia.org/wiki/Uniform_continuity" rel="noreferrer">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href="https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem" rel="noreferrer">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li> <li><a href="https://en.wikipedia.org/wiki/Absolute_continuity" rel="noreferrer">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li> <li><a href="https://en.wikipedia.org/wiki/Lipschitz_continuity" rel="noreferrer">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href="https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem" rel="noreferrer">Picard-Lindelöf theorem</a>.</li> <li><a href="https://en.wikipedia.org/wiki/H%C3%B6lder_condition" rel="noreferrer">Hölder continuity</a> is a refinement of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li> <li><a href="https://en.wikipedia.org/wiki/Equicontinuity" rel="noreferrer">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href="https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem" rel="noreferrer">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li> </ul> <p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though), the excercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p> <p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there is not a lot of theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend to track down a translation of the <em>first</em> Russian edition, because it is much more concise if sparser on applications.)</p> Answer 8: <p>I think this link has a good introduction on Lipschitz continuity </p> <p><a href="http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf" rel="nofollow noreferrer">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p> <p>And for Real Analysis, iI think book by Royden , by S.Kumaresan is really good and student friendly it encourages you to think.</p> Answer 9: <p>Re: <em>Lipschitz</em> continuity, I suggest you look at Fig. 29 in <a href="https://books.google.fr/books?id=JUoyqlW7PZgC&amp;lpg=PP1&amp;pg=PA38#v=onepage&amp;q&amp;f=false" rel="noreferrer">Arnold (1992)</a> and the discussion around it. Myself, I think about the Lipschitz property as more of a weakened smoothness condition than a strengthened continuity one; the symbol $C^{1-}$ sometimes used for Lipschitz-continuous functions is quite apt. 
(Note that $C^0 \supset C^{1-} \supset C^1$; about the $C^k$ notation, see <a href="http://mathworld.wolfram.com/C-kFunction.html" rel="noreferrer">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p> <p><a href="https://i.stack.imgur.com/Qz26I.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Qz26I.png" alt="Image showing that Lipshitz continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”."></a></p> <p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p> <ul> <li>Continuity, I expect you have some understanding of.</li> <li><a href="https://en.wikipedia.org/wiki/Uniform_continuity" rel="noreferrer">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href="https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem" rel="noreferrer">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li> <li><a href="https://en.wikipedia.org/wiki/Absolute_continuity" rel="noreferrer">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li> <li><a href="https://en.wikipedia.org/wiki/Lipschitz_continuity" rel="noreferrer">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href="https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem" rel="noreferrer">Picard-Lindelöf theorem</a>.</li> <li><a href="https://en.wikipedia.org/wiki/H%C3%B6lder_condition" rel="noreferrer">Hölder continuity</a> is a refinement of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li> <li><a href="https://en.wikipedia.org/wiki/Equicontinuity" rel="noreferrer">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href="https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem" rel="noreferrer">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li> </ul> <p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though), the excercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p> <p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there is not a lot of theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend to track down a translation of the <em>first</em> Russian edition, because it is much more concise if sparser on applications.)</p> Answer 10: <p>I think this link has a good introduction on Lipschitz continuity </p> <p><a href="http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf" rel="nofollow noreferrer">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p> <p>And for Real Analysis, iI think book by Royden , by S.Kumaresan is really good and student friendly it encourages you to think.</p> Answer 11: <p>Re: <em>Lipschitz</em> continuity, I suggest you look at Fig. 29 in <a href="https://books.google.fr/books?id=JUoyqlW7PZgC&amp;lpg=PP1&amp;pg=PA38#v=onepage&amp;q&amp;f=false" rel="noreferrer">Arnold (1992)</a> and the discussion around it. Myself, I think about the Lipschitz property as more of a weakened smoothness condition than a strengthened continuity one; the symbol $C^{1-}$ sometimes used for Lipschitz-continuous functions is quite apt. 
(Note that $C^0 \supset C^{1-} \supset C^1$; about the $C^k$ notation, see <a href="http://mathworld.wolfram.com/C-kFunction.html" rel="noreferrer">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p> <p><a href="https://i.stack.imgur.com/Qz26I.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Qz26I.png" alt="Image showing that Lipshitz continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”."></a></p> <p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p> <ul> <li>Continuity, I expect you have some understanding of.</li> <li><a href="https://en.wikipedia.org/wiki/Uniform_continuity" rel="noreferrer">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href="https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem" rel="noreferrer">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li> <li><a href="https://en.wikipedia.org/wiki/Absolute_continuity" rel="noreferrer">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li> <li><a href="https://en.wikipedia.org/wiki/Lipschitz_continuity" rel="noreferrer">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href="https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem" rel="noreferrer">Picard-Lindelöf theorem</a>.</li> <li><a href="https://en.wikipedia.org/wiki/H%C3%B6lder_condition" rel="noreferrer">Hölder continuity</a> is a refinement of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li> <li><a href="https://en.wikipedia.org/wiki/Equicontinuity" rel="noreferrer">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href="https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem" rel="noreferrer">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li> </ul> <p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though), the excercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p> <p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there is not a lot of theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend to track down a translation of the <em>first</em> Russian edition, because it is much more concise if sparser on applications.)</p> Answer 12: <p>I think this link has a good introduction on Lipschitz continuity </p> <p><a href="http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf" rel="nofollow noreferrer">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p> <p>And for Real Analysis, iI think book by Royden , by S.Kumaresan is really good and student friendly it encourages you to think.</p> Answer 13: <p>Re: <em>Lipschitz</em> continuity, I suggest you look at Fig. 29 in <a href="https://books.google.fr/books?id=JUoyqlW7PZgC&amp;lpg=PP1&amp;pg=PA38#v=onepage&amp;q&amp;f=false" rel="noreferrer">Arnold (1992)</a> and the discussion around it. Myself, I think about the Lipschitz property as more of a weakened smoothness condition than a strengthened continuity one; the symbol $C^{1-}$ sometimes used for Lipschitz-continuous functions is quite apt. 
(Note that $C^0 \supset C^{1-} \supset C^1$; about the $C^k$ notation, see <a href="http://mathworld.wolfram.com/C-kFunction.html" rel="noreferrer">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p> <p><a href="https://i.stack.imgur.com/Qz26I.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Qz26I.png" alt="Image showing that Lipshitz continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”."></a></p> <p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p> <ul> <li>Continuity, I expect you have some understanding of.</li> <li><a href="https://en.wikipedia.org/wiki/Uniform_continuity" rel="noreferrer">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href="https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem" rel="noreferrer">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li> <li><a href="https://en.wikipedia.org/wiki/Absolute_continuity" rel="noreferrer">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li> <li><a href="https://en.wikipedia.org/wiki/Lipschitz_continuity" rel="noreferrer">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href="https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem" rel="noreferrer">Picard-Lindelöf theorem</a>.</li> <li><a href="https://en.wikipedia.org/wiki/H%C3%B6lder_condition" rel="noreferrer">Hölder continuity</a> is a refinement of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li> <li><a href="https://en.wikipedia.org/wiki/Equicontinuity" rel="noreferrer">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href="https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem" rel="noreferrer">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li> </ul> <p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though), the excercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p> <p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there is not a lot of theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend to track down a translation of the <em>first</em> Russian edition, because it is much more concise if sparser on applications.)</p> Answer 14: <p>I think this link has a good introduction on Lipschitz continuity </p> <p><a href="http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf" rel="nofollow noreferrer">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p> <p>And for Real Analysis, iI think book by Royden , by S.Kumaresan is really good and student friendly it encourages you to think.</p> Answer 15: <p>Re: <em>Lipschitz</em> continuity, I suggest you look at Fig. 29 in <a href="https://books.google.fr/books?id=JUoyqlW7PZgC&amp;lpg=PP1&amp;pg=PA38#v=onepage&amp;q&amp;f=false" rel="noreferrer">Arnold (1992)</a> and the discussion around it. Myself, I think about the Lipschitz property as more of a weakened smoothness condition than a strengthened continuity one; the symbol $C^{1-}$ sometimes used for Lipschitz-continuous functions is quite apt. 
(Note that $C^0 \supset C^{1-} \supset C^1$; about the $C^k$ notation, see <a href="http://mathworld.wolfram.com/C-kFunction.html" rel="noreferrer">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p> <p><a href="https://i.stack.imgur.com/Qz26I.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Qz26I.png" alt="Image showing that Lipshitz continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”."></a></p> <p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p> <ul> <li>Continuity, I expect you have some understanding of.</li> <li><a href="https://en.wikipedia.org/wiki/Uniform_continuity" rel="noreferrer">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href="https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem" rel="noreferrer">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li> <li><a href="https://en.wikipedia.org/wiki/Absolute_continuity" rel="noreferrer">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li> <li><a href="https://en.wikipedia.org/wiki/Lipschitz_continuity" rel="noreferrer">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href="https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem" rel="noreferrer">Picard-Lindelöf theorem</a>.</li> <li><a href="https://en.wikipedia.org/wiki/H%C3%B6lder_condition" rel="noreferrer">Hölder continuity</a> is a refinement of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li> <li><a href="https://en.wikipedia.org/wiki/Equicontinuity" rel="noreferrer">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href="https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem" rel="noreferrer">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li> </ul> <p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though), the excercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p> <p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there is not a lot of theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend to track down a translation of the <em>first</em> Russian edition, because it is much more concise if sparser on applications.)</p> Answer 16: <p>I think this link has a good introduction on Lipschitz continuity </p> <p><a href="http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf" rel="nofollow noreferrer">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p> <p>And for Real Analysis, iI think book by Royden , by S.Kumaresan is really good and student friendly it encourages you to think.</p> Answer 17: <p>Re: <em>Lipschitz</em> continuity, I suggest you look at Fig. 29 in <a href="https://books.google.fr/books?id=JUoyqlW7PZgC&amp;lpg=PP1&amp;pg=PA38#v=onepage&amp;q&amp;f=false" rel="noreferrer">Arnold (1992)</a> and the discussion around it. Myself, I think about the Lipschitz property as more of a weakened smoothness condition than a strengthened continuity one; the symbol $C^{1-}$ sometimes used for Lipschitz-continuous functions is quite apt. 
(Note that $C^0 \supset C^{1-} \supset C^1$; about the $C^k$ notation, see <a href="http://mathworld.wolfram.com/C-kFunction.html" rel="noreferrer">the <em>MathWorld</em> article</a> in case you’ve never taken multivariable calculus.)</p> <p><a href="https://i.stack.imgur.com/Qz26I.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Qz26I.png" alt="Image showing that Lipshitz continuous function passes inside a cone centered at a given point, with an acute angle, titled “the Lipschitz condition”."></a></p> <p>Re: other notions of continuity, I’m not sure that you are going to need them all. To enumerate:</p> <ul> <li>Continuity, I expect you have some understanding of.</li> <li><a href="https://en.wikipedia.org/wiki/Uniform_continuity" rel="noreferrer">Uniform continuity</a> is the same as continuity when you are on a compact set (the <a href="https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem" rel="noreferrer">Heine-Cantor theorem</a>). Uniform continuity is what you use to prove Riemann integrability, so a simpler version and applications of this result usually occur somewhere around there in a calculus/analysis course.</li> <li><a href="https://en.wikipedia.org/wiki/Absolute_continuity" rel="noreferrer">Absolute continuity</a> mostly appears in connection with Lebesgue integration, so that is where you should go for the explanations. 
Whether you <em>need</em> to depends on the notion of integration used in the ODE course, and if you indeed need to learn a whole new theory of integration then you’ve got bigger problems than continuity notions.</li> <li><a href="https://en.wikipedia.org/wiki/Lipschitz_continuity" rel="noreferrer">Lipschitz continuity</a> I tried to elaborate on above, and it is indeed the notion used for the usual ODE existence and uniqueness theorem, the <a href="https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem" rel="noreferrer">Picard-Lindelöf theorem</a>.</li> <li><a href="https://en.wikipedia.org/wiki/H%C3%B6lder_condition" rel="noreferrer">Hölder continuity</a> is a refinement of Lipschitz continuity (look at the definition) used mostly in advanced PDE theory. If you made it to fourth year without real analysis, you will be able to graduate without ever encountering it.</li> <li><a href="https://en.wikipedia.org/wiki/Equicontinuity" rel="noreferrer">Equicontinuity</a> (and uniform equicontinuity, and Lipschitz property with a common constant) is a property of <em>families</em> of functions rather than of functions themselves. You should compare it to uniform convergence of functional sequences. It is used in the <a href="https://en.wikipedia.org/wiki/Arzel%C3%A0%E2%80%93Ascoli_theorem" rel="noreferrer">Arzelà-Ascoli theorem</a>, which you <em>may or may not</em> end up using in the course.</li> </ul> <p>Aside from continuity, your remarks make me wonder whether you took some kind of multivariable differential calculus. If not, study it, because it is required here. A ball there is just a normal Euclidean ball (possibly $n$-dimensional though), the excercise is mostly to figure out where in the usual calculus proofs you can change a modulus sign into a norm (<em>i.e.</em> length) sign, and where you should replace multiplication by a linear map (<em>i.e.</em> matrix multiplication). 
Functional (infinite-dimensional) balls should come afterwards.</p> <p>Re: the course, brace yourself, because the proofs <em>are</em> going to be hard for you. Thankfully, there are not many theorems in a basic ODE course, and the statements are all rather intuitive. Given that you are asking the question at this time of year, I suppose that the course began with the theorems, which is really not a wise choice. (However, if it is really <em>all</em> like this, <em>i.e.</em> if it is going to proceed into PDE theory, dynamical systems and so on, I strongly suggest you drop it.) Read the first chapters in Arnold’s book to get what you are actually getting at with those statements. (I would even recommend tracking down a translation of the <em>first</em> Russian edition, because it is much more concise, if sparser on applications.)</p> Answer 18: <p>I think this link has a good introduction to Lipschitz continuity:</p> <p><a href="http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf" rel="nofollow noreferrer">http://users.wpi.edu/~walker/MA500/HANDOUTS/LipschitzContinuity.pdf</a>.</p> <p>And for real analysis, I think the books by Royden and by S. Kumaresan are really good and student-friendly; they encourage you to think.</p>
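The Lipschitz condition discussed in the answers above ($|f(x)-f(y)| \le L\,|x-y|$ for a fixed constant $L$) is easy to probe numerically. A small illustrative sketch (the grid and the test functions are my own choices, not from the answers): $\sin$ satisfies the bound with $L=1$, while $\sqrt{x}$ on $(0,1]$ is continuous but admits no finite $L$, because its difference quotients blow up near $0$.

```python
import math

def max_difference_quotient(f, xs):
    """Largest |f(x) - f(y)| / |x - y| over all pairs from a sorted grid."""
    return max(abs(f(y) - f(x)) / (y - x)
               for i, x in enumerate(xs) for y in xs[i + 1:])

# sin is 1-Lipschitz on all of R, since |sin'| = |cos| <= 1,
# so the quotients on any grid stay below 1.
grid = [k / 200 for k in range(1, 601)]          # points in (0, 3]
print(max_difference_quotient(math.sin, grid))

# sqrt is continuous on [0, 1] but not Lipschitz there: its difference
# quotients grow without bound as the sample points approach 0.
near_zero = sorted(10.0 ** (-k) for k in range(1, 10))
print(max_difference_quotient(math.sqrt, near_zero))  # very large
```

This also illustrates why the picture in Arnold's Fig. 29 works: a Lipschitz graph stays inside a fixed double cone of slope $L$, which $\sqrt{x}$ violates at the origin.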
prove that $f(c)=\frac{1}{2}(c-a)(c-b)f''(\xi)$ for $\xi \in (a,b)$
2,097,116
6
https://math.stackexchange.com/questions/2097116/prove-that-fc-frac12c-ac-bf-xi-for-xi-in-a-b
<blockquote> <p>A function $f(x)$ is continuous on $[a,b]$ and $f''(x)$ exists for all $x\in (a,b)$. If $c\in (a,b)$ and $f(a)=f(b)=0$, prove that $f(c)=\frac{1}{2}(c-a)(c-b)f''(\xi)$ for some $\xi \in (a,b)$.</p> </blockquote> <p>I have no idea how to solve it. I know that I have to use the Lagrange MVT. Please help.</p>
[ { "answer_id": 2097175, "score": 4, "text": "<p>Consider $g:x\\mapsto (b-a)f(x)-\\frac 12 (a-b)(b-x)(x-a)C$</p>\n\n<p>You can find $C$ such that $g(c)=0$.</p>\n\n<p>Note that $g(a)=g(b)=g(c)=0$, so there's some $\\xi$ such that $g''(\\xi)=0$.</p>\n" }, { "answer_id": 2097172, "score": 0, "text": "<p>Yes indeed Lagrange MVT will work , </p>\n\n<p>Let $a&lt;\\delta_1&lt;c&lt;\\delta_2&lt;b$ and $\\xi \\in (\\delta_1,\\delta_2)$ .</p>\n\n<p>If we start with the following expression,</p>\n\n<p>$\\displaystyle \\frac{1}{b-a}\\left(\\frac{f(c)-f(a)}{c-a}-\\frac{f(c)-f(b)}{c-b}\\right)$</p>\n\n<p>Now, There must exist $\\delta_1\\in(a,c)$ &amp; $\\delta_2\\in(c,b)$ such that $\\displaystyle f'(\\delta_1)=\\frac{f(c)-f(a)}{c-a}$ &amp; $\\displaystyle f'(\\delta_2)=\\frac{f(c)-f(b)}{c-b}$ by Lagrange MVT</p>\n\n<p>Now suppose $\\delta_1&lt;\\xi&lt;\\delta_2$ then since $f'(x)$ exists in $(a,b)$ and continuous on $[\\delta_1,\\delta_2]$ so again by Lagrange MVT we have ,</p>\n\n<p>$\\displaystyle f''(\\xi)=\\frac{f'(\\delta_2)-f'(\\delta_1)}{\\delta_2-\\delta_1}$</p>\n\n<p>So the very first expression turns,</p>\n\n<p>$\\displaystyle \\frac{1}{b-a}\\left(\\frac{f(c)-f(a)}{c-a}-\\frac{f(c)-f(b)}{c-b}\\right) = \\frac{f'(\\delta_2)-f'(\\delta_1)}{b-a}=\\frac{f'(\\delta_2)-f'(\\delta_1)}{2(\\delta_2-\\delta_1)}=\\frac{1}{2}f''(\\xi)$ </p>\n\n<p>where $\\displaystyle \\left(\\frac{b-a}{2}=\\delta_2-\\delta_1\\right)$</p>\n\n<p>Since $f(a)=f(b)=0$ this is equivalent to $\\displaystyle f(c)=\\frac{1}{2}(c-a)(c-b)f''(\\xi)$</p>\n" } ]
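The hint in the first answer above is terse; spelled out (a sketch of the standard auxiliary-function argument, filling in the steps the answer leaves implicit):

```latex
\[
g(x) = (b-a)\,f(x) - \tfrac12 (a-b)(b-x)(x-a)\,C,
\qquad C \text{ chosen so that } g(c)=0 .
\]
Since $f(a)=f(b)=0$, we get $g(a)=g(b)=g(c)=0$; Rolle's theorem on $(a,c)$
and $(c,b)$ gives $g'(\xi_1)=g'(\xi_2)=0$, and Rolle again on
$(\xi_1,\xi_2)$ yields some $\xi\in(a,b)$ with $g''(\xi)=0$. Because
$(b-x)(x-a)$ is a quadratic with second derivative $-2$,
\[
g''(x) = (b-a)\,f''(x) + (a-b)\,C = (b-a)\bigl(f''(x)-C\bigr),
\]
so $g''(\xi)=0$ forces $C=f''(\xi)$. Unwinding $g(c)=0$,
\[
(b-a)\,f(c) = \tfrac12 (a-b)(b-c)(c-a)\,C = \tfrac12 (b-a)(c-a)(c-b)\,C
\quad\Longrightarrow\quad
f(c) = \tfrac12 (c-a)(c-b)\,f''(\xi).
\]
```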
Question: prove that $f(c)=\frac{1}{2}(c-a)(c-b)f''(\xi)$ for $\xi \in (a,b)$ Answer 1: <p>Consider $g:x\mapsto (b-a)f(x)-\frac 12 (a-b)(b-x)(x-a)C$</p> <p>You can find $C$ such that $g(c)=0$.</p> <p>Note that $g(a)=g(b)=g(c)=0$, so there's some $\xi$ such that $g''(\xi)=0$.</p> Answer 2: <p>Yes indeed Lagrange MVT will work , </p> <p>Let $a&lt;\delta_1&lt;c&lt;\delta_2&lt;b$ and $\xi \in (\delta_1,\delta_2)$ .</p> <p>If we start with the following expression,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right)$</p> <p>Now, There must exist $\delta_1\in(a,c)$ &amp; $\delta_2\in(c,b)$ such that $\displaystyle f'(\delta_1)=\frac{f(c)-f(a)}{c-a}$ &amp; $\displaystyle f'(\delta_2)=\frac{f(c)-f(b)}{c-b}$ by Lagrange MVT</p> <p>Now suppose $\delta_1&lt;\xi&lt;\delta_2$ then since $f'(x)$ exists in $(a,b)$ and continuous on $[\delta_1,\delta_2]$ so again by Lagrange MVT we have ,</p> <p>$\displaystyle f''(\xi)=\frac{f'(\delta_2)-f'(\delta_1)}{\delta_2-\delta_1}$</p> <p>So the very first expression turns,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right) = \frac{f'(\delta_2)-f'(\delta_1)}{b-a}=\frac{f'(\delta_2)-f'(\delta_1)}{2(\delta_2-\delta_1)}=\frac{1}{2}f''(\xi)$ </p> <p>where $\displaystyle \left(\frac{b-a}{2}=\delta_2-\delta_1\right)$</p> <p>Since $f(a)=f(b)=0$ this is equivalent to $\displaystyle f(c)=\frac{1}{2}(c-a)(c-b)f''(\xi)$</p>
</p> <p>Note that $g(a)=g(b)=g(c)=0$, so there's some $\xi$ such that $g''(\xi)=0$.</p> Answer 12: <p>Yes indeed Lagrange MVT will work , </p> <p>Let $a&lt;\delta_1&lt;c&lt;\delta_2&lt;b$ and $\xi \in (\delta_1,\delta_2)$ .</p> <p>If we start with the following expression,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right)$</p> <p>Now, There must exist $\delta_1\in(a,c)$ &amp; $\delta_2\in(c,b)$ such that $\displaystyle f'(\delta_1)=\frac{f(c)-f(a)}{c-a}$ &amp; $\displaystyle f'(\delta_2)=\frac{f(c)-f(b)}{c-b}$ by Lagrange MVT</p> <p>Now suppose $\delta_1&lt;\xi&lt;\delta_2$ then since $f'(x)$ exists in $(a,b)$ and continuous on $[\delta_1,\delta_2]$ so again by Lagrange MVT we have ,</p> <p>$\displaystyle f''(\xi)=\frac{f'(\delta_2)-f'(\delta_1)}{\delta_2-\delta_1}$</p> <p>So the very first expression turns,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right) = \frac{f'(\delta_2)-f'(\delta_1)}{b-a}=\frac{f'(\delta_2)-f'(\delta_1)}{2(\delta_2-\delta_1)}=\frac{1}{2}f''(\xi)$ </p> <p>where $\displaystyle \left(\frac{b-a}{2}=\delta_2-\delta_1\right)$</p> <p>Since $f(a)=f(b)=0$ this is equivalent to $\displaystyle f(c)=\frac{1}{2}(c-a)(c-b)f''(\xi)$</p> Answer 13: <p>Consider $g:x\mapsto (b-a)f(x)-\frac 12 (a-b)(b-x)(x-a)C$</p> <p>You can find $C$ such that $g(c)=0$. 
</p> <p>Note that $g(a)=g(b)=g(c)=0$, so there's some $\xi$ such that $g''(\xi)=0$.</p> Answer 14: <p>Yes indeed Lagrange MVT will work , </p> <p>Let $a&lt;\delta_1&lt;c&lt;\delta_2&lt;b$ and $\xi \in (\delta_1,\delta_2)$ .</p> <p>If we start with the following expression,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right)$</p> <p>Now, There must exist $\delta_1\in(a,c)$ &amp; $\delta_2\in(c,b)$ such that $\displaystyle f'(\delta_1)=\frac{f(c)-f(a)}{c-a}$ &amp; $\displaystyle f'(\delta_2)=\frac{f(c)-f(b)}{c-b}$ by Lagrange MVT</p> <p>Now suppose $\delta_1&lt;\xi&lt;\delta_2$ then since $f'(x)$ exists in $(a,b)$ and continuous on $[\delta_1,\delta_2]$ so again by Lagrange MVT we have ,</p> <p>$\displaystyle f''(\xi)=\frac{f'(\delta_2)-f'(\delta_1)}{\delta_2-\delta_1}$</p> <p>So the very first expression turns,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right) = \frac{f'(\delta_2)-f'(\delta_1)}{b-a}=\frac{f'(\delta_2)-f'(\delta_1)}{2(\delta_2-\delta_1)}=\frac{1}{2}f''(\xi)$ </p> <p>where $\displaystyle \left(\frac{b-a}{2}=\delta_2-\delta_1\right)$</p> <p>Since $f(a)=f(b)=0$ this is equivalent to $\displaystyle f(c)=\frac{1}{2}(c-a)(c-b)f''(\xi)$</p> Answer 15: <p>Consider $g:x\mapsto (b-a)f(x)-\frac 12 (a-b)(b-x)(x-a)C$</p> <p>You can find $C$ such that $g(c)=0$. 
</p> <p>Note that $g(a)=g(b)=g(c)=0$, so there's some $\xi$ such that $g''(\xi)=0$.</p> Answer 16: <p>Yes indeed Lagrange MVT will work , </p> <p>Let $a&lt;\delta_1&lt;c&lt;\delta_2&lt;b$ and $\xi \in (\delta_1,\delta_2)$ .</p> <p>If we start with the following expression,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right)$</p> <p>Now, There must exist $\delta_1\in(a,c)$ &amp; $\delta_2\in(c,b)$ such that $\displaystyle f'(\delta_1)=\frac{f(c)-f(a)}{c-a}$ &amp; $\displaystyle f'(\delta_2)=\frac{f(c)-f(b)}{c-b}$ by Lagrange MVT</p> <p>Now suppose $\delta_1&lt;\xi&lt;\delta_2$ then since $f'(x)$ exists in $(a,b)$ and continuous on $[\delta_1,\delta_2]$ so again by Lagrange MVT we have ,</p> <p>$\displaystyle f''(\xi)=\frac{f'(\delta_2)-f'(\delta_1)}{\delta_2-\delta_1}$</p> <p>So the very first expression turns,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right) = \frac{f'(\delta_2)-f'(\delta_1)}{b-a}=\frac{f'(\delta_2)-f'(\delta_1)}{2(\delta_2-\delta_1)}=\frac{1}{2}f''(\xi)$ </p> <p>where $\displaystyle \left(\frac{b-a}{2}=\delta_2-\delta_1\right)$</p> <p>Since $f(a)=f(b)=0$ this is equivalent to $\displaystyle f(c)=\frac{1}{2}(c-a)(c-b)f''(\xi)$</p> Answer 17: <p>Consider $g:x\mapsto (b-a)f(x)-\frac 12 (a-b)(b-x)(x-a)C$</p> <p>You can find $C$ such that $g(c)=0$. 
</p> <p>Note that $g(a)=g(b)=g(c)=0$, so there's some $\xi$ such that $g''(\xi)=0$.</p> Answer 18: <p>Yes indeed Lagrange MVT will work , </p> <p>Let $a&lt;\delta_1&lt;c&lt;\delta_2&lt;b$ and $\xi \in (\delta_1,\delta_2)$ .</p> <p>If we start with the following expression,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right)$</p> <p>Now, There must exist $\delta_1\in(a,c)$ &amp; $\delta_2\in(c,b)$ such that $\displaystyle f'(\delta_1)=\frac{f(c)-f(a)}{c-a}$ &amp; $\displaystyle f'(\delta_2)=\frac{f(c)-f(b)}{c-b}$ by Lagrange MVT</p> <p>Now suppose $\delta_1&lt;\xi&lt;\delta_2$ then since $f'(x)$ exists in $(a,b)$ and continuous on $[\delta_1,\delta_2]$ so again by Lagrange MVT we have ,</p> <p>$\displaystyle f''(\xi)=\frac{f'(\delta_2)-f'(\delta_1)}{\delta_2-\delta_1}$</p> <p>So the very first expression turns,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right) = \frac{f'(\delta_2)-f'(\delta_1)}{b-a}=\frac{f'(\delta_2)-f'(\delta_1)}{2(\delta_2-\delta_1)}=\frac{1}{2}f''(\xi)$ </p> <p>where $\displaystyle \left(\frac{b-a}{2}=\delta_2-\delta_1\right)$</p> <p>Since $f(a)=f(b)=0$ this is equivalent to $\displaystyle f(c)=\frac{1}{2}(c-a)(c-b)f''(\xi)$</p> Answer 19: <p>Consider $g:x\mapsto (b-a)f(x)-\frac 12 (a-b)(b-x)(x-a)C$</p> <p>You can find $C$ such that $g(c)=0$. 
</p> <p>Note that $g(a)=g(b)=g(c)=0$, so there's some $\xi$ such that $g''(\xi)=0$.</p> Answer 20: <p>Yes indeed Lagrange MVT will work , </p> <p>Let $a&lt;\delta_1&lt;c&lt;\delta_2&lt;b$ and $\xi \in (\delta_1,\delta_2)$ .</p> <p>If we start with the following expression,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right)$</p> <p>Now, There must exist $\delta_1\in(a,c)$ &amp; $\delta_2\in(c,b)$ such that $\displaystyle f'(\delta_1)=\frac{f(c)-f(a)}{c-a}$ &amp; $\displaystyle f'(\delta_2)=\frac{f(c)-f(b)}{c-b}$ by Lagrange MVT</p> <p>Now suppose $\delta_1&lt;\xi&lt;\delta_2$ then since $f'(x)$ exists in $(a,b)$ and continuous on $[\delta_1,\delta_2]$ so again by Lagrange MVT we have ,</p> <p>$\displaystyle f''(\xi)=\frac{f'(\delta_2)-f'(\delta_1)}{\delta_2-\delta_1}$</p> <p>So the very first expression turns,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right) = \frac{f'(\delta_2)-f'(\delta_1)}{b-a}=\frac{f'(\delta_2)-f'(\delta_1)}{2(\delta_2-\delta_1)}=\frac{1}{2}f''(\xi)$ </p> <p>where $\displaystyle \left(\frac{b-a}{2}=\delta_2-\delta_1\right)$</p> <p>Since $f(a)=f(b)=0$ this is equivalent to $\displaystyle f(c)=\frac{1}{2}(c-a)(c-b)f''(\xi)$</p> Answer 21: <p>Consider $g:x\mapsto (b-a)f(x)-\frac 12 (a-b)(b-x)(x-a)C$</p> <p>You can find $C$ such that $g(c)=0$. 
</p> <p>Note that $g(a)=g(b)=g(c)=0$, so there's some $\xi$ such that $g''(\xi)=0$.</p> Answer 22: <p>Yes indeed Lagrange MVT will work , </p> <p>Let $a&lt;\delta_1&lt;c&lt;\delta_2&lt;b$ and $\xi \in (\delta_1,\delta_2)$ .</p> <p>If we start with the following expression,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right)$</p> <p>Now, There must exist $\delta_1\in(a,c)$ &amp; $\delta_2\in(c,b)$ such that $\displaystyle f'(\delta_1)=\frac{f(c)-f(a)}{c-a}$ &amp; $\displaystyle f'(\delta_2)=\frac{f(c)-f(b)}{c-b}$ by Lagrange MVT</p> <p>Now suppose $\delta_1&lt;\xi&lt;\delta_2$ then since $f'(x)$ exists in $(a,b)$ and continuous on $[\delta_1,\delta_2]$ so again by Lagrange MVT we have ,</p> <p>$\displaystyle f''(\xi)=\frac{f'(\delta_2)-f'(\delta_1)}{\delta_2-\delta_1}$</p> <p>So the very first expression turns,</p> <p>$\displaystyle \frac{1}{b-a}\left(\frac{f(c)-f(a)}{c-a}-\frac{f(c)-f(b)}{c-b}\right) = \frac{f'(\delta_2)-f'(\delta_1)}{b-a}=\frac{f'(\delta_2)-f'(\delta_1)}{2(\delta_2-\delta_1)}=\frac{1}{2}f''(\xi)$ </p> <p>where $\displaystyle \left(\frac{b-a}{2}=\delta_2-\delta_1\right)$</p> <p>Since $f(a)=f(b)=0$ this is equivalent to $\displaystyle f(c)=\frac{1}{2}(c-a)(c-b)f''(\xi)$</p>
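The identity both answers arrive at, $f(c)=\frac{1}{2}(c-a)(c-b)f''(\xi)$ for some $\xi\in(a,b)$ when $f(a)=f(b)=0$, is easy to sanity-check numerically. The sketch below is illustrative only: the test function $f(x)=\sin(\pi x)$ on $[0,1]$ and the point $c=0.3$ are arbitrary choices made for this check, and the $\xi$ is located by bisection on $f''$.

```python
import math

# Numeric sanity check of: if f(a) = f(b) = 0 and f is twice differentiable,
# then for c in (a, b) there is some xi in (a, b) with
#     f(c) = (1/2) (c - a)(c - b) f''(xi).
# Test function (arbitrary choice): f(x) = sin(pi x) on [0, 1], c = 0.3.

def f(x):
    return math.sin(math.pi * x)

def fpp(x):
    # exact second derivative of f
    return -math.pi ** 2 * math.sin(math.pi * x)

a, b, c = 0.0, 1.0, 0.3
target = f(c) / (0.5 * (c - a) * (c - b))   # value that f''(xi) must attain

# Bisection for xi with f''(xi) = target; the bracket [0.01, 0.5] straddles
# the root for this particular f and c.
lo, hi = 0.01, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if (fpp(lo) - target) * (fpp(mid) - target) <= 0:
        hi = mid
    else:
        lo = mid
xi = 0.5 * (lo + hi)

assert a < xi < b
assert abs(f(c) - 0.5 * (c - a) * (c - b) * fpp(xi)) < 1e-9
print(xi)
```

Running it locates $\xi \approx 0.285$ for this instance; the identity guarantees existence of such a $\xi$, not uniqueness.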
In a triangle $ABC$ let $D$ be the midpoint of $BC$. If $\angle ADB=45^\circ$ and $\angle ACD=30^\circ$ then find $\angle BAD$
2,097,263
6
https://math.stackexchange.com/questions/2097263/in-a-triangle-abc-let-d-be-the-midpoint-of-bc-if-angle-adb-45-circ-a
<p>In a triangle $\Delta ABC$, let $D$ be the midpoint of $BC$. If $\angle ADB=45^{\circ}$ and $\angle ACD=30^{\circ}$, then find $\angle BAD$.</p> <p>Now, this is a special case; do we need a construction?</p>
[ { "answer_id": 2097329, "score": 2, "text": "<p>Let's denote $\\angle BAD$ as $\\alpha$. In triangle $ABD$ we have $\\angle ABD=180-(45+\\alpha)$. If $\\angle ADB=45$ and $\\angle ACD=30$, then $\\angle ADC=180-45$ and $\\angle DAC=180-30-(180-45)=15$. Now, from the sine theorem in triangles $ACD$ and $ABD$ (using $|BD|=|CD|$):</p>\n\n<p>$$ \\frac{|CD|}{\\sin(15)}=\\frac{|AD|}{\\sin(30)}$$\n$$ \\frac{|AD|}{\\sin(180-(45+\\alpha))}=\\frac{|BD|}{\\sin(\\alpha)}=\\frac{|CD|}{\\sin(\\alpha)}$$</p>\n\n<p>From the first one, we have that: </p>\n\n<p>$$ |AD|=\\frac{\\sin(30)}{\\sin(15)}|CD|$$</p>\n\n<p>Plugging this into the second one gives us:</p>\n\n<p>$$ \\frac{\\sin(30)}{\\sin(15)}|CD|\\frac{1}{\\sin(180-(45+\\alpha))}=\\frac{|CD|}{\\sin(\\alpha)} $$\n$$ \\sin(\\alpha)\\sin(30)=\\sin(15)\\sin(180-(45+\\alpha))=\\sin(15)\\sin(45+\\alpha) $$</p>\n\n<p>Can you take it from here? </p>\n" }, { "answer_id": 4583392, "score": 2, "text": "<p>This can be solved very easily using a simple geometry construction:</p>\n<p><a href=\"https://i.stack.imgur.com/ELV6P.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ELV6P.png\" alt=\"enter image description here\" /></a></p>\n<p>1.) Draw segment <span class=\"math-container\">$DE$</span> where point <span class=\"math-container\">$E$</span> lies on <span class=\"math-container\">$AC$</span> such that <span class=\"math-container\">$\\angle DAC=\\angle ADE=15$</span>. This means that <span class=\"math-container\">$\\angle DEC=30$</span>, therefore <span class=\"math-container\">$AE=ED=BD=DC$</span>. Now, join <span class=\"math-container\">$B$</span> and <span class=\"math-container\">$E$</span> via <span class=\"math-container\">$BE$</span>.
Notice that <span class=\"math-container\">$\\angle BDE=60$</span> and <span class=\"math-container\">$BD=DE$</span>; this proves that <span class=\"math-container\">$\\triangle BDE$</span> is equilateral, and therefore, <span class=\"math-container\">$AE=ED=BD=DC=BE$</span>.</p>\n<p>2.) Lastly, notice that <span class=\"math-container\">$\\angle BEC=\\angle BEA=90$</span>, and that <span class=\"math-container\">$AE=BE$</span>; this means <span class=\"math-container\">$\\triangle BAE$</span> is an isosceles right triangle, and <span class=\"math-container\">$\\angle BAE=45$</span>. Therefore <span class=\"math-container\">$\\angle BAD=45-15=30$</span></p>\n" }, { "answer_id": 2099293, "score": 1, "text": "<p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p>\n\n<p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. By construction, $AC'$ is the orthogonal bisector of $B'C$ so $$B'C' = CC'\\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, CAC' = \\angle \\, B'AC' = 30^{\\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equilateral triangle $AB'C$).</p>\n\n<p><a href=\"https://i.stack.imgur.com/bBf9T.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bBf9T.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now, by assumption, $\\angle \\, DAC = \\angle \\, ADB - \\angle \\, ACD = 45^{\\circ} - 30^{\\circ} = 15^{\\circ}$. Hence, line $AD$ is the angle bisector in isosceles triangle $CAC'$ through vertex $A$ and so $AD$ is also the orthogonal bisector of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\\angle \\, CDC' = 90^{\\circ}$.
However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, BC'C = 90^{\\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\\angle \\, BC'B' = 60^{\\circ}$. Thus $BB'C'$ is an equilateral triangle. This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. Therefore, $$\\angle \\, BAB' = \\angle \\, BAC' = \\frac{1}{2} \\, \\angle \\, B'AC' = \\frac{1}{2} \\, 30^{\\circ} = 15^{\\circ}$$ We can conclude that $$\\angle \\, BAD = \\angle \\, B'AC - \\big(\\angle \\, BAB' + \\angle DAC \\big) = 60^{\\circ} - (15^{\\circ} + 15^{\\circ}) = 30^{\\circ}$$</p>\n" }, { "answer_id": 2097326, "score": 0, "text": "<p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\\triangle AEC$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/ZaxoF.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ZaxoF.png\" alt=\"enter image description here\"></a></p>\n\n<p>Let $AC = b$</p>\n\n<p>Then by basic trigonometry we get:</p>\n\n<p>$EA=\\frac{b}{2}$</p>\n\n<p>$DC=EC-ED=\\frac{\\sqrt{3}-1}{2}b$</p>\n\n<p>$EB=EC-BC=\\frac{2-\\sqrt{3}}{2}b$</p>\n\n<p>$\\angle EAB=\\tan^{-1}\\frac{EB}{EA}=\\frac{\\pi}{12}$</p>\n\n<p>Therefore,</p>\n\n<p>$$\\angle BAD=\\angle EAD-\\angle EAB=\\frac{\\pi}{4}-\\frac{\\pi}{12}=\\frac{\\pi}{6}$$</p>\n" }, { "answer_id": 2097329, "score": 2, "text": "<p>Let's denote $\\angle BAD$ as $\\alpha$. We have that $\\angle ADB=180-(45+\\alpha)$. If $\\angle ADB=45$ and $\\angle ACD=30$, then $\\angle ADC=180-45$ and $\\angle CDA=180-30-(180-45)=15$. 
Now, from the sine theorem:</p>\n\n<p>$$ \\frac{|CD|}{\\sin(15)}=\\frac{|AC|}{\\sin(180-45)}$$\n$$ \\frac{|AC|}{\\sin(180-(45+\\alpha))}=\\frac{|BD|}{\\sin(\\alpha)}=\\frac{|CD|}{\\sin(\\alpha)}$$</p>\n\n<p>From the first one, we have that: </p>\n\n<p>$$ |AC|=\\frac{\\sin(180-45)}{\\sin(15)}|CD|$$</p>\n\n<p>Plugging this into the second one gives us:</p>\n\n<p>$$ \\frac{\\sin(180-45)}{\\sin(15)}|CD|\\frac{1}{\\sin(180-(45+\\alpha))}=\\frac{|CD|}{\\sin(\\alpha)} $$\n$$ \\frac{\\sin(15)\\sin(180-45)}{\\sin(180-(45+\\alpha))}=\\frac{1}{\\sin(\\alpha)} $$\n$$ \\sin(\\alpha)=\\frac{\\sin(180-(45+\\alpha))}{\\sin(15)\\sin(180-45)}=\\frac{\\sin(45+\\alpha)}{\\sin(15)\\sin(180-45)} $$</p>\n\n<p>Can you take it from here? </p>\n" }, { "answer_id": 4583392, "score": 2, "text": "<p>This can be solved very easily using a simple geometry construction:</p>\n<p><a href=\"https://i.stack.imgur.com/ELV6P.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ELV6P.png\" alt=\"enter image description here\" /></a></p>\n<p>1.) Draw segment <span class=\"math-container\">$DE$</span> where point <span class=\"math-container\">$E$</span> lies on <span class=\"math-container\">$AC$</span> such that <span class=\"math-container\">$\\angle DAC=\\angle ADE=15$</span>. This means that <span class=\"math-container\">$\\angle DEC=30$</span>, therefore <span class=\"math-container\">$AE=ED=BD=DC$</span>. Now, join <span class=\"math-container\">$B$</span> and <span class=\"math-container\">$E$</span> via <span class=\"math-container\">$BE$</span>. Notice that <span class=\"math-container\">$\\angle BDE=60$</span> and <span class=\"math-container\">$BD=DE$</span>, this proves that <span class=\"math-container\">$\\triangle BDE$</span> is equilateral, and therefore, <span class=\"math-container\">$AE=ED=BD=DC=BE$</span>.</p>\n<p>2.) 
Lastly, notice that <span class=\"math-container\">$\\angle BEC=\\angle BEA=90$</span>, and that <span class=\"math-container\">$AE=BE$</span>, this means <span class=\"math-container\">$\\triangle BAE$</span> is an isosceles right triangle, and <span class=\"math-container\">$\\angle BAE=45$</span>. Therefore <span class=\"math-container\">$\\angle BAD=45-15=30$</span></p>\n" }, { "answer_id": 2099293, "score": 1, "text": "<p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p>\n\n<p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, CAC' = B'AC' = 30^{\\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p>\n\n<p><a href=\"https://i.stack.imgur.com/bBf9T.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bBf9T.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now, by assumption, $\\angle \\, DAC = \\angle \\, ADB - \\angle \\, ACD = 45^{\\circ} - 30^{\\circ} = 15^{\\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\\angle \\, CDC' = 90^{\\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, BC'C = 90^{\\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\\angle \\, BC'B' = 60^{\\circ}$. Thus $BB'C'$ is an equilateral triangle. 
This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. Therefore, $$\\angle \\, BAB' = \\angle \\, BAC' = \\frac{1}{2} \\, \\angle \\, B'AC' = \\frac{1}{2} \\, 30^{\\circ} = 15^{\\circ}$$ We can conclude that $$\\angle \\, BAD = \\angle \\, B'AC - \\big(\\angle \\, BAB' + \\angle DAC \\big) = 60^{\\circ} - (15^{\\circ} + 15^{\\circ}) = 30^{\\circ}$$</p>\n" }, { "answer_id": 2097326, "score": 0, "text": "<p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\\triangle AEC$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/ZaxoF.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ZaxoF.png\" alt=\"enter image description here\"></a></p>\n\n<p>Let $AC = b$</p>\n\n<p>Then by basic trigonometry we get:</p>\n\n<p>$EA=\\frac{b}{2}$</p>\n\n<p>$DC=EC-ED=\\frac{\\sqrt{3}-1}{2}b$</p>\n\n<p>$EB=EC-BC=\\frac{2-\\sqrt{3}}{2}b$</p>\n\n<p>$\\angle EAB=\\tan^{-1}\\frac{EB}{EA}=\\frac{\\pi}{12}$</p>\n\n<p>Therefore,</p>\n\n<p>$$\\angle BAD=\\angle EAD-\\angle EAB=\\frac{\\pi}{4}-\\frac{\\pi}{12}=\\frac{\\pi}{6}$$</p>\n" }, { "answer_id": 2097329, "score": 2, "text": "<p>Let's denote $\\angle BAD$ as $\\alpha$. We have that $\\angle ADB=180-(45+\\alpha)$. If $\\angle ADB=45$ and $\\angle ACD=30$, then $\\angle ADC=180-45$ and $\\angle CDA=180-30-(180-45)=15$. 
Now, from the sine theorem:</p>\n\n<p>$$ \\frac{|CD|}{\\sin(15)}=\\frac{|AC|}{\\sin(180-45)}$$\n$$ \\frac{|AC|}{\\sin(180-(45+\\alpha))}=\\frac{|BD|}{\\sin(\\alpha)}=\\frac{|CD|}{\\sin(\\alpha)}$$</p>\n\n<p>From the first one, we have that: </p>\n\n<p>$$ |AC|=\\frac{\\sin(180-45)}{\\sin(15)}|CD|$$</p>\n\n<p>Plugging this into the second one gives us:</p>\n\n<p>$$ \\frac{\\sin(180-45)}{\\sin(15)}|CD|\\frac{1}{\\sin(180-(45+\\alpha))}=\\frac{|CD|}{\\sin(\\alpha)} $$\n$$ \\frac{\\sin(15)\\sin(180-45)}{\\sin(180-(45+\\alpha))}=\\frac{1}{\\sin(\\alpha)} $$\n$$ \\sin(\\alpha)=\\frac{\\sin(180-(45+\\alpha))}{\\sin(15)\\sin(180-45)}=\\frac{\\sin(45+\\alpha)}{\\sin(15)\\sin(180-45)} $$</p>\n\n<p>Can you take it from here? </p>\n" }, { "answer_id": 4583392, "score": 2, "text": "<p>This can be solved very easily using a simple geometry construction:</p>\n<p><a href=\"https://i.stack.imgur.com/ELV6P.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ELV6P.png\" alt=\"enter image description here\" /></a></p>\n<p>1.) Draw segment <span class=\"math-container\">$DE$</span> where point <span class=\"math-container\">$E$</span> lies on <span class=\"math-container\">$AC$</span> such that <span class=\"math-container\">$\\angle DAC=\\angle ADE=15$</span>. This means that <span class=\"math-container\">$\\angle DEC=30$</span>, therefore <span class=\"math-container\">$AE=ED=BD=DC$</span>. Now, join <span class=\"math-container\">$B$</span> and <span class=\"math-container\">$E$</span> via <span class=\"math-container\">$BE$</span>. Notice that <span class=\"math-container\">$\\angle BDE=60$</span> and <span class=\"math-container\">$BD=DE$</span>, this proves that <span class=\"math-container\">$\\triangle BDE$</span> is equilateral, and therefore, <span class=\"math-container\">$AE=ED=BD=DC=BE$</span>.</p>\n<p>2.) 
Lastly, notice that <span class=\"math-container\">$\\angle BEC=\\angle BEA=90$</span>, and that <span class=\"math-container\">$AE=BE$</span>, this means <span class=\"math-container\">$\\triangle BAE$</span> is an isosceles right triangle, and <span class=\"math-container\">$\\angle BAE=45$</span>. Therefore <span class=\"math-container\">$\\angle BAD=45-15=30$</span></p>\n" }, { "answer_id": 2099293, "score": 1, "text": "<p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p>\n\n<p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, CAC' = B'AC' = 30^{\\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p>\n\n<p><a href=\"https://i.stack.imgur.com/bBf9T.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bBf9T.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now, by assumption, $\\angle \\, DAC = \\angle \\, ADB - \\angle \\, ACD = 45^{\\circ} - 30^{\\circ} = 15^{\\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\\angle \\, CDC' = 90^{\\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, BC'C = 90^{\\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\\angle \\, BC'B' = 60^{\\circ}$. Thus $BB'C'$ is an equilateral triangle. 
This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. Therefore, $$\\angle \\, BAB' = \\angle \\, BAC' = \\frac{1}{2} \\, \\angle \\, B'AC' = \\frac{1}{2} \\, 30^{\\circ} = 15^{\\circ}$$ We can conclude that $$\\angle \\, BAD = \\angle \\, B'AC - \\big(\\angle \\, BAB' + \\angle DAC \\big) = 60^{\\circ} - (15^{\\circ} + 15^{\\circ}) = 30^{\\circ}$$</p>\n" }, { "answer_id": 2097326, "score": 0, "text": "<p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\\triangle AEC$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/ZaxoF.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ZaxoF.png\" alt=\"enter image description here\"></a></p>\n\n<p>Let $AC = b$</p>\n\n<p>Then by basic trigonometry we get:</p>\n\n<p>$EA=\\frac{b}{2}$</p>\n\n<p>$DC=EC-ED=\\frac{\\sqrt{3}-1}{2}b$</p>\n\n<p>$EB=EC-BC=\\frac{2-\\sqrt{3}}{2}b$</p>\n\n<p>$\\angle EAB=\\tan^{-1}\\frac{EB}{EA}=\\frac{\\pi}{12}$</p>\n\n<p>Therefore,</p>\n\n<p>$$\\angle BAD=\\angle EAD-\\angle EAB=\\frac{\\pi}{4}-\\frac{\\pi}{12}=\\frac{\\pi}{6}$$</p>\n" }, { "answer_id": 2097329, "score": 2, "text": "<p>Let's denote $\\angle BAD$ as $\\alpha$. We have that $\\angle ADB=180-(45+\\alpha)$. If $\\angle ADB=45$ and $\\angle ACD=30$, then $\\angle ADC=180-45$ and $\\angle CDA=180-30-(180-45)=15$. 
Now, from the sine theorem:</p>\n\n<p>$$ \\frac{|CD|}{\\sin(15)}=\\frac{|AC|}{\\sin(180-45)}$$\n$$ \\frac{|AC|}{\\sin(180-(45+\\alpha))}=\\frac{|BD|}{\\sin(\\alpha)}=\\frac{|CD|}{\\sin(\\alpha)}$$</p>\n\n<p>From the first one, we have that: </p>\n\n<p>$$ |AC|=\\frac{\\sin(180-45)}{\\sin(15)}|CD|$$</p>\n\n<p>Plugging this into the second one gives us:</p>\n\n<p>$$ \\frac{\\sin(180-45)}{\\sin(15)}|CD|\\frac{1}{\\sin(180-(45+\\alpha))}=\\frac{|CD|}{\\sin(\\alpha)} $$\n$$ \\frac{\\sin(15)\\sin(180-45)}{\\sin(180-(45+\\alpha))}=\\frac{1}{\\sin(\\alpha)} $$\n$$ \\sin(\\alpha)=\\frac{\\sin(180-(45+\\alpha))}{\\sin(15)\\sin(180-45)}=\\frac{\\sin(45+\\alpha)}{\\sin(15)\\sin(180-45)} $$</p>\n\n<p>Can you take it from here? </p>\n" }, { "answer_id": 4583392, "score": 2, "text": "<p>This can be solved very easily using a simple geometry construction:</p>\n<p><a href=\"https://i.stack.imgur.com/ELV6P.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ELV6P.png\" alt=\"enter image description here\" /></a></p>\n<p>1.) Draw segment <span class=\"math-container\">$DE$</span> where point <span class=\"math-container\">$E$</span> lies on <span class=\"math-container\">$AC$</span> such that <span class=\"math-container\">$\\angle DAC=\\angle ADE=15$</span>. This means that <span class=\"math-container\">$\\angle DEC=30$</span>, therefore <span class=\"math-container\">$AE=ED=BD=DC$</span>. Now, join <span class=\"math-container\">$B$</span> and <span class=\"math-container\">$E$</span> via <span class=\"math-container\">$BE$</span>. Notice that <span class=\"math-container\">$\\angle BDE=60$</span> and <span class=\"math-container\">$BD=DE$</span>, this proves that <span class=\"math-container\">$\\triangle BDE$</span> is equilateral, and therefore, <span class=\"math-container\">$AE=ED=BD=DC=BE$</span>.</p>\n<p>2.) 
Lastly, notice that <span class=\"math-container\">$\\angle BEC=\\angle BEA=90$</span>, and that <span class=\"math-container\">$AE=BE$</span>, this means <span class=\"math-container\">$\\triangle BAE$</span> is an isosceles right triangle, and <span class=\"math-container\">$\\angle BAE=45$</span>. Therefore <span class=\"math-container\">$\\angle BAD=45-15=30$</span></p>\n" }, { "answer_id": 2099293, "score": 1, "text": "<p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p>\n\n<p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, CAC' = B'AC' = 30^{\\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p>\n\n<p><a href=\"https://i.stack.imgur.com/bBf9T.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bBf9T.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now, by assumption, $\\angle \\, DAC = \\angle \\, ADB - \\angle \\, ACD = 45^{\\circ} - 30^{\\circ} = 15^{\\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\\angle \\, CDC' = 90^{\\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, BC'C = 90^{\\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\\angle \\, BC'B' = 60^{\\circ}$. Thus $BB'C'$ is an equilateral triangle. 
This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. Therefore, $$\\angle \\, BAB' = \\angle \\, BAC' = \\frac{1}{2} \\, \\angle \\, B'AC' = \\frac{1}{2} \\, 30^{\\circ} = 15^{\\circ}$$ We can conclude that $$\\angle \\, BAD = \\angle \\, B'AC - \\big(\\angle \\, BAB' + \\angle DAC \\big) = 60^{\\circ} - (15^{\\circ} + 15^{\\circ}) = 30^{\\circ}$$</p>\n" }, { "answer_id": 2097326, "score": 0, "text": "<p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\\triangle AEC$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/ZaxoF.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ZaxoF.png\" alt=\"enter image description here\"></a></p>\n\n<p>Let $AC = b$</p>\n\n<p>Then by basic trigonometry we get:</p>\n\n<p>$EA=\\frac{b}{2}$</p>\n\n<p>$DC=EC-ED=\\frac{\\sqrt{3}-1}{2}b$</p>\n\n<p>$EB=EC-BC=\\frac{2-\\sqrt{3}}{2}b$</p>\n\n<p>$\\angle EAB=\\tan^{-1}\\frac{EB}{EA}=\\frac{\\pi}{12}$</p>\n\n<p>Therefore,</p>\n\n<p>$$\\angle BAD=\\angle EAD-\\angle EAB=\\frac{\\pi}{4}-\\frac{\\pi}{12}=\\frac{\\pi}{6}$$</p>\n" }, { "answer_id": 2097329, "score": 2, "text": "<p>Let's denote $\\angle BAD$ as $\\alpha$. We have that $\\angle ADB=180-(45+\\alpha)$. If $\\angle ADB=45$ and $\\angle ACD=30$, then $\\angle ADC=180-45$ and $\\angle CDA=180-30-(180-45)=15$. 
Now, from the sine theorem in triangles $ACD$ and $ABD$:</p>\n\n<p>$$ \\frac{|CD|}{\\sin(15)}=\\frac{|AD|}{\\sin(30)}$$\n$$ \\frac{|AD|}{\\sin(180-(45+\\alpha))}=\\frac{|BD|}{\\sin(\\alpha)}=\\frac{|CD|}{\\sin(\\alpha)}$$</p>\n\n<p>From the first one, we have that: </p>\n\n<p>$$ |AD|=\\frac{\\sin(30)}{\\sin(15)}|CD|$$</p>\n\n<p>Plugging this into the second one gives us:</p>\n\n<p>$$ \\frac{\\sin(30)}{\\sin(15)}|CD|\\frac{1}{\\sin(180-(45+\\alpha))}=\\frac{|CD|}{\\sin(\\alpha)} $$\n$$ \\sin(30)\\sin(\\alpha)=\\sin(15)\\sin(180-(45+\\alpha)) $$\n$$ \\sin(\\alpha)=\\frac{\\sin(15)\\sin(135-\\alpha)}{\\sin(30)}=2\\sin(15)\\sin(45+\\alpha) $$</p>\n\n<p>Can you take it from here? </p>\n" }, { "answer_id": 4583392, "score": 2, "text": "<p>This can be solved very easily using a simple geometry construction:</p>\n<p><a href=\"https://i.stack.imgur.com/ELV6P.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ELV6P.png\" alt=\"enter image description here\" /></a></p>\n<p>1.) Draw segment <span class=\"math-container\">$DE$</span> where point <span class=\"math-container\">$E$</span> lies on <span class=\"math-container\">$AC$</span> such that <span class=\"math-container\">$\\angle DAC=\\angle ADE=15$</span>. This means that <span class=\"math-container\">$\\angle DEC=30$</span>, therefore <span class=\"math-container\">$AE=ED=BD=DC$</span>. Now, join <span class=\"math-container\">$B$</span> and <span class=\"math-container\">$E$</span> via <span class=\"math-container\">$BE$</span>. Notice that <span class=\"math-container\">$\\angle BDE=60$</span> and <span class=\"math-container\">$BD=DE$</span>, this proves that <span class=\"math-container\">$\\triangle BDE$</span> is equilateral, and therefore, <span class=\"math-container\">$AE=ED=BD=DC=BE$</span>.</p>\n<p>2.) 
Lastly, notice that <span class=\"math-container\">$\\angle BEC=\\angle BEA=90$</span>, and that <span class=\"math-container\">$AE=BE$</span>, this means <span class=\"math-container\">$\\triangle BAE$</span> is an isosceles right triangle, and <span class=\"math-container\">$\\angle BAE=45$</span>. Therefore <span class=\"math-container\">$\\angle BAD=45-15=30$</span></p>\n" }, { "answer_id": 2099293, "score": 1, "text": "<p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p>\n\n<p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, CAC' = B'AC' = 30^{\\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p>\n\n<p><a href=\"https://i.stack.imgur.com/bBf9T.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bBf9T.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now, by assumption, $\\angle \\, DAC = \\angle \\, ADB - \\angle \\, ACD = 45^{\\circ} - 30^{\\circ} = 15^{\\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\\angle \\, CDC' = 90^{\\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, BC'C = 90^{\\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\\angle \\, BC'B' = 60^{\\circ}$. Thus $BB'C'$ is an equilateral triangle. 
This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. Therefore, $$\\angle \\, BAB' = \\angle \\, BAC' = \\frac{1}{2} \\, \\angle \\, B'AC' = \\frac{1}{2} \\, 30^{\\circ} = 15^{\\circ}$$ We can conclude that $$\\angle \\, BAD = \\angle \\, B'AC - \\big(\\angle \\, BAB' + \\angle DAC \\big) = 60^{\\circ} - (15^{\\circ} + 15^{\\circ}) = 30^{\\circ}$$</p>\n" }, { "answer_id": 2097326, "score": 0, "text": "<p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\\triangle AEC$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/ZaxoF.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ZaxoF.png\" alt=\"enter image description here\"></a></p>\n\n<p>Let $AC = b$</p>\n\n<p>Then by basic trigonometry we get:</p>\n\n<p>$EA=\\frac{b}{2}$</p>\n\n<p>$DC=EC-ED=\\frac{\\sqrt{3}-1}{2}b$</p>\n\n<p>$EB=EC-BC=\\frac{2-\\sqrt{3}}{2}b$</p>\n\n<p>$\\angle EAB=\\tan^{-1}\\frac{EB}{EA}=\\frac{\\pi}{12}$</p>\n\n<p>Therefore,</p>\n\n<p>$$\\angle BAD=\\angle EAD-\\angle EAB=\\frac{\\pi}{4}-\\frac{\\pi}{12}=\\frac{\\pi}{6}$$</p>\n" }, { "answer_id": 2097329, "score": 2, "text": "<p>Let's denote $\\angle BAD$ as $\\alpha$. We have that $\\angle ADB=180-(45+\\alpha)$. If $\\angle ADB=45$ and $\\angle ACD=30$, then $\\angle ADC=180-45$ and $\\angle CDA=180-30-(180-45)=15$. 
Now, from the sine theorem:</p>\n\n<p>$$ \\frac{|CD|}{\\sin(15)}=\\frac{|AC|}{\\sin(180-45)}$$\n$$ \\frac{|AC|}{\\sin(180-(45+\\alpha))}=\\frac{|BD|}{\\sin(\\alpha)}=\\frac{|CD|}{\\sin(\\alpha)}$$</p>\n\n<p>From the first one, we have that: </p>\n\n<p>$$ |AC|=\\frac{\\sin(180-45)}{\\sin(15)}|CD|$$</p>\n\n<p>Plugging this into the second one gives us:</p>\n\n<p>$$ \\frac{\\sin(180-45)}{\\sin(15)}|CD|\\frac{1}{\\sin(180-(45+\\alpha))}=\\frac{|CD|}{\\sin(\\alpha)} $$\n$$ \\frac{\\sin(15)\\sin(180-45)}{\\sin(180-(45+\\alpha))}=\\frac{1}{\\sin(\\alpha)} $$\n$$ \\sin(\\alpha)=\\frac{\\sin(180-(45+\\alpha))}{\\sin(15)\\sin(180-45)}=\\frac{\\sin(45+\\alpha)}{\\sin(15)\\sin(180-45)} $$</p>\n\n<p>Can you take it from here? </p>\n" }, { "answer_id": 4583392, "score": 2, "text": "<p>This can be solved very easily using a simple geometry construction:</p>\n<p><a href=\"https://i.stack.imgur.com/ELV6P.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ELV6P.png\" alt=\"enter image description here\" /></a></p>\n<p>1.) Draw segment <span class=\"math-container\">$DE$</span> where point <span class=\"math-container\">$E$</span> lies on <span class=\"math-container\">$AC$</span> such that <span class=\"math-container\">$\\angle DAC=\\angle ADE=15$</span>. This means that <span class=\"math-container\">$\\angle DEC=30$</span>, therefore <span class=\"math-container\">$AE=ED=BD=DC$</span>. Now, join <span class=\"math-container\">$B$</span> and <span class=\"math-container\">$E$</span> via <span class=\"math-container\">$BE$</span>. Notice that <span class=\"math-container\">$\\angle BDE=60$</span> and <span class=\"math-container\">$BD=DE$</span>, this proves that <span class=\"math-container\">$\\triangle BDE$</span> is equilateral, and therefore, <span class=\"math-container\">$AE=ED=BD=DC=BE$</span>.</p>\n<p>2.) 
Lastly, notice that <span class=\"math-container\">$\\angle BEC=\\angle BEA=90$</span>, and that <span class=\"math-container\">$AE=BE$</span>, this means <span class=\"math-container\">$\\triangle BAE$</span> is an isosceles right triangle, and <span class=\"math-container\">$\\angle BAE=45$</span>. Therefore <span class=\"math-container\">$\\angle BAD=45-15=30$</span></p>\n" }, { "answer_id": 2099293, "score": 1, "text": "<p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p>\n\n<p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, CAC' = B'AC' = 30^{\\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p>\n\n<p><a href=\"https://i.stack.imgur.com/bBf9T.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bBf9T.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now, by assumption, $\\angle \\, DAC = \\angle \\, ADB - \\angle \\, ACD = 45^{\\circ} - 30^{\\circ} = 15^{\\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\\angle \\, CDC' = 90^{\\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, BC'C = 90^{\\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\\angle \\, BC'B' = 60^{\\circ}$. Thus $BB'C'$ is an equilateral triangle. 
This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. Therefore, $$\\angle \\, BAB' = \\angle \\, BAC' = \\frac{1}{2} \\, \\angle \\, B'AC' = \\frac{1}{2} \\, 30^{\\circ} = 15^{\\circ}$$ We can conclude that $$\\angle \\, BAD = \\angle \\, B'AC - \\big(\\angle \\, BAB' + \\angle DAC \\big) = 60^{\\circ} - (15^{\\circ} + 15^{\\circ}) = 30^{\\circ}$$</p>\n" }, { "answer_id": 2097326, "score": 0, "text": "<p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\\triangle AEC$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/ZaxoF.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ZaxoF.png\" alt=\"enter image description here\"></a></p>\n\n<p>Let $AC = b$</p>\n\n<p>Then by basic trigonometry we get:</p>\n\n<p>$EA=\\frac{b}{2}$</p>\n\n<p>$DC=EC-ED=\\frac{\\sqrt{3}-1}{2}b$</p>\n\n<p>$EB=EC-BC=\\frac{2-\\sqrt{3}}{2}b$</p>\n\n<p>$\\angle EAB=\\tan^{-1}\\frac{EB}{EA}=\\frac{\\pi}{12}$</p>\n\n<p>Therefore,</p>\n\n<p>$$\\angle BAD=\\angle EAD-\\angle EAB=\\frac{\\pi}{4}-\\frac{\\pi}{12}=\\frac{\\pi}{6}$$</p>\n" }, { "answer_id": 2097329, "score": 2, "text": "<p>Let's denote $\\angle BAD$ as $\\alpha$. We have that $\\angle ADB=180-(45+\\alpha)$. If $\\angle ADB=45$ and $\\angle ACD=30$, then $\\angle ADC=180-45$ and $\\angle CDA=180-30-(180-45)=15$. 
Now, from the sine theorem:</p>\n\n<p>$$ \\frac{|CD|}{\\sin(15)}=\\frac{|AC|}{\\sin(180-45)}$$\n$$ \\frac{|AC|}{\\sin(180-(45+\\alpha))}=\\frac{|BD|}{\\sin(\\alpha)}=\\frac{|CD|}{\\sin(\\alpha)}$$</p>\n\n<p>From the first one, we have that: </p>\n\n<p>$$ |AC|=\\frac{\\sin(180-45)}{\\sin(15)}|CD|$$</p>\n\n<p>Plugging this into the second one gives us:</p>\n\n<p>$$ \\frac{\\sin(180-45)}{\\sin(15)}|CD|\\frac{1}{\\sin(180-(45+\\alpha))}=\\frac{|CD|}{\\sin(\\alpha)} $$\n$$ \\frac{\\sin(15)\\sin(180-45)}{\\sin(180-(45+\\alpha))}=\\frac{1}{\\sin(\\alpha)} $$\n$$ \\sin(\\alpha)=\\frac{\\sin(180-(45+\\alpha))}{\\sin(15)\\sin(180-45)}=\\frac{\\sin(45+\\alpha)}{\\sin(15)\\sin(180-45)} $$</p>\n\n<p>Can you take it from here? </p>\n" }, { "answer_id": 4583392, "score": 2, "text": "<p>This can be solved very easily using a simple geometry construction:</p>\n<p><a href=\"https://i.stack.imgur.com/ELV6P.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ELV6P.png\" alt=\"enter image description here\" /></a></p>\n<p>1.) Draw segment <span class=\"math-container\">$DE$</span> where point <span class=\"math-container\">$E$</span> lies on <span class=\"math-container\">$AC$</span> such that <span class=\"math-container\">$\\angle DAC=\\angle ADE=15$</span>. This means that <span class=\"math-container\">$\\angle DEC=30$</span>, therefore <span class=\"math-container\">$AE=ED=BD=DC$</span>. Now, join <span class=\"math-container\">$B$</span> and <span class=\"math-container\">$E$</span> via <span class=\"math-container\">$BE$</span>. Notice that <span class=\"math-container\">$\\angle BDE=60$</span> and <span class=\"math-container\">$BD=DE$</span>, this proves that <span class=\"math-container\">$\\triangle BDE$</span> is equilateral, and therefore, <span class=\"math-container\">$AE=ED=BD=DC=BE$</span>.</p>\n<p>2.) 
Lastly, notice that <span class=\"math-container\">$\\angle BEC=\\angle BEA=90$</span>, and that <span class=\"math-container\">$AE=BE$</span>, this means <span class=\"math-container\">$\\triangle BAE$</span> is an isosceles right triangle, and <span class=\"math-container\">$\\angle BAE=45$</span>. Therefore <span class=\"math-container\">$\\angle BAD=45-15=30$</span></p>\n" }, { "answer_id": 2099293, "score": 1, "text": "<p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p>\n\n<p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, CAC' = B'AC' = 30^{\\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p>\n\n<p><a href=\"https://i.stack.imgur.com/bBf9T.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bBf9T.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now, by assumption, $\\angle \\, DAC = \\angle \\, ADB - \\angle \\, ACD = 45^{\\circ} - 30^{\\circ} = 15^{\\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\\angle \\, CDC' = 90^{\\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, BC'C = 90^{\\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\\angle \\, BC'B' = 60^{\\circ}$. Thus $BB'C'$ is an equilateral triangle. 
This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. Therefore, $$\\angle \\, BAB' = \\angle \\, BAC' = \\frac{1}{2} \\, \\angle \\, B'AC' = \\frac{1}{2} \\, 30^{\\circ} = 15^{\\circ}$$ We can conclude that $$\\angle \\, BAD = \\angle \\, B'AC - \\big(\\angle \\, BAB' + \\angle DAC \\big) = 60^{\\circ} - (15^{\\circ} + 15^{\\circ}) = 30^{\\circ}$$</p>\n" }, { "answer_id": 2097326, "score": 0, "text": "<p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\\triangle AEC$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/ZaxoF.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ZaxoF.png\" alt=\"enter image description here\"></a></p>\n\n<p>Let $AC = b$</p>\n\n<p>Then by basic trigonometry we get:</p>\n\n<p>$EA=\\frac{b}{2}$</p>\n\n<p>$DC=EC-ED=\\frac{\\sqrt{3}-1}{2}b$</p>\n\n<p>$EB=EC-BC=\\frac{2-\\sqrt{3}}{2}b$</p>\n\n<p>$\\angle EAB=\\tan^{-1}\\frac{EB}{EA}=\\frac{\\pi}{12}$</p>\n\n<p>Therefore,</p>\n\n<p>$$\\angle BAD=\\angle EAD-\\angle EAB=\\frac{\\pi}{4}-\\frac{\\pi}{12}=\\frac{\\pi}{6}$$</p>\n" }, { "answer_id": 2097329, "score": 2, "text": "<p>Let's denote $\\angle BAD$ as $\\alpha$. We have that $\\angle ADB=180-(45+\\alpha)$. If $\\angle ADB=45$ and $\\angle ACD=30$, then $\\angle ADC=180-45$ and $\\angle CDA=180-30-(180-45)=15$. 
Now, from the sine theorem:</p>\n\n<p>$$ \\frac{|CD|}{\\sin(15)}=\\frac{|AC|}{\\sin(180-45)}$$\n$$ \\frac{|AC|}{\\sin(180-(45+\\alpha))}=\\frac{|BD|}{\\sin(\\alpha)}=\\frac{|CD|}{\\sin(\\alpha)}$$</p>\n\n<p>From the first one, we have that: </p>\n\n<p>$$ |AC|=\\frac{\\sin(180-45)}{\\sin(15)}|CD|$$</p>\n\n<p>Plugging this into the second one gives us:</p>\n\n<p>$$ \\frac{\\sin(180-45)}{\\sin(15)}|CD|\\frac{1}{\\sin(180-(45+\\alpha))}=\\frac{|CD|}{\\sin(\\alpha)} $$\n$$ \\frac{\\sin(15)\\sin(180-45)}{\\sin(180-(45+\\alpha))}=\\frac{1}{\\sin(\\alpha)} $$\n$$ \\sin(\\alpha)=\\frac{\\sin(180-(45+\\alpha))}{\\sin(15)\\sin(180-45)}=\\frac{\\sin(45+\\alpha)}{\\sin(15)\\sin(180-45)} $$</p>\n\n<p>Can you take it from here? </p>\n" }, { "answer_id": 4583392, "score": 2, "text": "<p>This can be solved very easily using a simple geometry construction:</p>\n<p><a href=\"https://i.stack.imgur.com/ELV6P.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ELV6P.png\" alt=\"enter image description here\" /></a></p>\n<p>1.) Draw segment <span class=\"math-container\">$DE$</span> where point <span class=\"math-container\">$E$</span> lies on <span class=\"math-container\">$AC$</span> such that <span class=\"math-container\">$\\angle DAC=\\angle ADE=15$</span>. This means that <span class=\"math-container\">$\\angle DEC=30$</span>, therefore <span class=\"math-container\">$AE=ED=BD=DC$</span>. Now, join <span class=\"math-container\">$B$</span> and <span class=\"math-container\">$E$</span> via <span class=\"math-container\">$BE$</span>. Notice that <span class=\"math-container\">$\\angle BDE=60$</span> and <span class=\"math-container\">$BD=DE$</span>, this proves that <span class=\"math-container\">$\\triangle BDE$</span> is equilateral, and therefore, <span class=\"math-container\">$AE=ED=BD=DC=BE$</span>.</p>\n<p>2.) 
Lastly, notice that <span class=\"math-container\">$\\angle BEC=\\angle BEA=90$</span>, and that <span class=\"math-container\">$AE=BE$</span>, this means <span class=\"math-container\">$\\triangle BAE$</span> is an isosceles right triangle, and <span class=\"math-container\">$\\angle BAE=45$</span>. Therefore <span class=\"math-container\">$\\angle BAD=45-15=30$</span></p>\n" }, { "answer_id": 2099293, "score": 1, "text": "<p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p>\n\n<p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, CAC' = B'AC' = 30^{\\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p>\n\n<p><a href=\"https://i.stack.imgur.com/bBf9T.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bBf9T.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now, by assumption, $\\angle \\, DAC = \\angle \\, ADB - \\angle \\, ACD = 45^{\\circ} - 30^{\\circ} = 15^{\\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\\angle \\, CDC' = 90^{\\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, BC'C = 90^{\\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\\angle \\, BC'B' = 60^{\\circ}$. Thus $BB'C'$ is an equilateral triangle. 
This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. Therefore, $$\\angle \\, BAB' = \\angle \\, BAC' = \\frac{1}{2} \\, \\angle \\, B'AC' = \\frac{1}{2} \\, 30^{\\circ} = 15^{\\circ}$$ We can conclude that $$\\angle \\, BAD = \\angle \\, B'AC - \\big(\\angle \\, BAB' + \\angle DAC \\big) = 60^{\\circ} - (15^{\\circ} + 15^{\\circ}) = 30^{\\circ}$$</p>\n" }, { "answer_id": 2097326, "score": 0, "text": "<p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\\triangle AEC$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/ZaxoF.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ZaxoF.png\" alt=\"enter image description here\"></a></p>\n\n<p>Let $AC = b$</p>\n\n<p>Then by basic trigonometry we get:</p>\n\n<p>$EA=\\frac{b}{2}$</p>\n\n<p>$DC=EC-ED=\\frac{\\sqrt{3}-1}{2}b$</p>\n\n<p>$EB=EC-BC=\\frac{2-\\sqrt{3}}{2}b$</p>\n\n<p>$\\angle EAB=\\tan^{-1}\\frac{EB}{EA}=\\frac{\\pi}{12}$</p>\n\n<p>Therefore,</p>\n\n<p>$$\\angle BAD=\\angle EAD-\\angle EAB=\\frac{\\pi}{4}-\\frac{\\pi}{12}=\\frac{\\pi}{6}$$</p>\n" }, { "answer_id": 2097329, "score": 2, "text": "<p>Let's denote $\\angle BAD$ as $\\alpha$. We have that $\\angle ADB=180-(45+\\alpha)$. If $\\angle ADB=45$ and $\\angle ACD=30$, then $\\angle ADC=180-45$ and $\\angle CDA=180-30-(180-45)=15$. 
Now, from the sine theorem:</p>\n\n<p>$$ \\frac{|CD|}{\\sin(15)}=\\frac{|AC|}{\\sin(180-45)}$$\n$$ \\frac{|AC|}{\\sin(180-(45+\\alpha))}=\\frac{|BD|}{\\sin(\\alpha)}=\\frac{|CD|}{\\sin(\\alpha)}$$</p>\n\n<p>From the first one, we have that: </p>\n\n<p>$$ |AC|=\\frac{\\sin(180-45)}{\\sin(15)}|CD|$$</p>\n\n<p>Plugging this into the second one gives us:</p>\n\n<p>$$ \\frac{\\sin(180-45)}{\\sin(15)}|CD|\\frac{1}{\\sin(180-(45+\\alpha))}=\\frac{|CD|}{\\sin(\\alpha)} $$\n$$ \\frac{\\sin(15)\\sin(180-45)}{\\sin(180-(45+\\alpha))}=\\frac{1}{\\sin(\\alpha)} $$\n$$ \\sin(\\alpha)=\\frac{\\sin(180-(45+\\alpha))}{\\sin(15)\\sin(180-45)}=\\frac{\\sin(45+\\alpha)}{\\sin(15)\\sin(180-45)} $$</p>\n\n<p>Can you take it from here? </p>\n" }, { "answer_id": 4583392, "score": 2, "text": "<p>This can be solved very easily using a simple geometry construction:</p>\n<p><a href=\"https://i.stack.imgur.com/ELV6P.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ELV6P.png\" alt=\"enter image description here\" /></a></p>\n<p>1.) Draw segment <span class=\"math-container\">$DE$</span> where point <span class=\"math-container\">$E$</span> lies on <span class=\"math-container\">$AC$</span> such that <span class=\"math-container\">$\\angle DAC=\\angle ADE=15$</span>. This means that <span class=\"math-container\">$\\angle DEC=30$</span>, therefore <span class=\"math-container\">$AE=ED=BD=DC$</span>. Now, join <span class=\"math-container\">$B$</span> and <span class=\"math-container\">$E$</span> via <span class=\"math-container\">$BE$</span>. Notice that <span class=\"math-container\">$\\angle BDE=60$</span> and <span class=\"math-container\">$BD=DE$</span>, this proves that <span class=\"math-container\">$\\triangle BDE$</span> is equilateral, and therefore, <span class=\"math-container\">$AE=ED=BD=DC=BE$</span>.</p>\n<p>2.) 
Lastly, notice that <span class=\"math-container\">$\\angle BEC=\\angle BEA=90$</span>, and that <span class=\"math-container\">$AE=BE$</span>, this means <span class=\"math-container\">$\\triangle BAE$</span> is an isosceles right triangle, and <span class=\"math-container\">$\\angle BAE=45$</span>. Therefore <span class=\"math-container\">$\\angle BAD=45-15=30$</span></p>\n" }, { "answer_id": 2099293, "score": 1, "text": "<p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p>\n\n<p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, CAC' = B'AC' = 30^{\\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p>\n\n<p><a href=\"https://i.stack.imgur.com/bBf9T.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bBf9T.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now, by assumption, $\\angle \\, DAC = \\angle \\, ADB - \\angle \\, ACD = 45^{\\circ} - 30^{\\circ} = 15^{\\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\\angle \\, CDC' = 90^{\\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, BC'C = 90^{\\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\\angle \\, BC'B' = 60^{\\circ}$. Thus $BB'C'$ is an equilateral triangle. 
This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. Therefore, $$\\angle \\, BAB' = \\angle \\, BAC' = \\frac{1}{2} \\, \\angle \\, B'AC' = \\frac{1}{2} \\, 30^{\\circ} = 15^{\\circ}$$ We can conclude that $$\\angle \\, BAD = \\angle \\, B'AC - \\big(\\angle \\, BAB' + \\angle DAC \\big) = 60^{\\circ} - (15^{\\circ} + 15^{\\circ}) = 30^{\\circ}$$</p>\n" }, { "answer_id": 2097326, "score": 0, "text": "<p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\\triangle AEC$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/ZaxoF.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ZaxoF.png\" alt=\"enter image description here\"></a></p>\n\n<p>Let $AC = b$</p>\n\n<p>Then by basic trigonometry we get:</p>\n\n<p>$EA=\\frac{b}{2}$</p>\n\n<p>$DC=EC-ED=\\frac{\\sqrt{3}-1}{2}b$</p>\n\n<p>$EB=EC-BC=\\frac{2-\\sqrt{3}}{2}b$</p>\n\n<p>$\\angle EAB=\\tan^{-1}\\frac{EB}{EA}=\\frac{\\pi}{12}$</p>\n\n<p>Therefore,</p>\n\n<p>$$\\angle BAD=\\angle EAD-\\angle EAB=\\frac{\\pi}{4}-\\frac{\\pi}{12}=\\frac{\\pi}{6}$$</p>\n" }, { "answer_id": 2097329, "score": 2, "text": "<p>Let's denote $\\angle BAD$ as $\\alpha$. We have that $\\angle ADB=180-(45+\\alpha)$. If $\\angle ADB=45$ and $\\angle ACD=30$, then $\\angle ADC=180-45$ and $\\angle CDA=180-30-(180-45)=15$. 
Now, from the sine theorem:</p>\n\n<p>$$ \\frac{|CD|}{\\sin(15)}=\\frac{|AC|}{\\sin(180-45)}$$\n$$ \\frac{|AC|}{\\sin(180-(45+\\alpha))}=\\frac{|BD|}{\\sin(\\alpha)}=\\frac{|CD|}{\\sin(\\alpha)}$$</p>\n\n<p>From the first one, we have that: </p>\n\n<p>$$ |AC|=\\frac{\\sin(180-45)}{\\sin(15)}|CD|$$</p>\n\n<p>Plugging this into the second one gives us:</p>\n\n<p>$$ \\frac{\\sin(180-45)}{\\sin(15)}|CD|\\frac{1}{\\sin(180-(45+\\alpha))}=\\frac{|CD|}{\\sin(\\alpha)} $$\n$$ \\frac{\\sin(15)\\sin(180-45)}{\\sin(180-(45+\\alpha))}=\\frac{1}{\\sin(\\alpha)} $$\n$$ \\sin(\\alpha)=\\frac{\\sin(180-(45+\\alpha))}{\\sin(15)\\sin(180-45)}=\\frac{\\sin(45+\\alpha)}{\\sin(15)\\sin(180-45)} $$</p>\n\n<p>Can you take it from here? </p>\n" }, { "answer_id": 4583392, "score": 2, "text": "<p>This can be solved very easily using a simple geometry construction:</p>\n<p><a href=\"https://i.stack.imgur.com/ELV6P.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ELV6P.png\" alt=\"enter image description here\" /></a></p>\n<p>1.) Draw segment <span class=\"math-container\">$DE$</span> where point <span class=\"math-container\">$E$</span> lies on <span class=\"math-container\">$AC$</span> such that <span class=\"math-container\">$\\angle DAC=\\angle ADE=15$</span>. This means that <span class=\"math-container\">$\\angle DEC=30$</span>, therefore <span class=\"math-container\">$AE=ED=BD=DC$</span>. Now, join <span class=\"math-container\">$B$</span> and <span class=\"math-container\">$E$</span> via <span class=\"math-container\">$BE$</span>. Notice that <span class=\"math-container\">$\\angle BDE=60$</span> and <span class=\"math-container\">$BD=DE$</span>, this proves that <span class=\"math-container\">$\\triangle BDE$</span> is equilateral, and therefore, <span class=\"math-container\">$AE=ED=BD=DC=BE$</span>.</p>\n<p>2.) 
Lastly, notice that <span class=\"math-container\">$\\angle BEC=\\angle BEA=90$</span>, and that <span class=\"math-container\">$AE=BE$</span>, this means <span class=\"math-container\">$\\triangle BAE$</span> is an isosceles right triangle, and <span class=\"math-container\">$\\angle BAE=45$</span>. Therefore <span class=\"math-container\">$\\angle BAD=45-15=30$</span></p>\n" }, { "answer_id": 2099293, "score": 1, "text": "<p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p>\n\n<p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, CAC' = B'AC' = 30^{\\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p>\n\n<p><a href=\"https://i.stack.imgur.com/bBf9T.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bBf9T.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now, by assumption, $\\angle \\, DAC = \\angle \\, ADB - \\angle \\, ACD = 45^{\\circ} - 30^{\\circ} = 15^{\\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\\angle \\, CDC' = 90^{\\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, BC'C = 90^{\\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\\angle \\, BC'B' = 60^{\\circ}$. Thus $BB'C'$ is an equilateral triangle. 
This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. Therefore, $$\\angle \\, BAB' = \\angle \\, BAC' = \\frac{1}{2} \\, \\angle \\, B'AC' = \\frac{1}{2} \\, 30^{\\circ} = 15^{\\circ}$$ We can conclude that $$\\angle \\, BAD = \\angle \\, B'AC - \\big(\\angle \\, BAB' + \\angle DAC \\big) = 60^{\\circ} - (15^{\\circ} + 15^{\\circ}) = 30^{\\circ}$$</p>\n" }, { "answer_id": 2097326, "score": 0, "text": "<p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\\triangle AEC$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/ZaxoF.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ZaxoF.png\" alt=\"enter image description here\"></a></p>\n\n<p>Let $AC = b$</p>\n\n<p>Then by basic trigonometry we get:</p>\n\n<p>$EA=\\frac{b}{2}$</p>\n\n<p>$DC=EC-ED=\\frac{\\sqrt{3}-1}{2}b$</p>\n\n<p>$EB=EC-BC=\\frac{2-\\sqrt{3}}{2}b$</p>\n\n<p>$\\angle EAB=\\tan^{-1}\\frac{EB}{EA}=\\frac{\\pi}{12}$</p>\n\n<p>Therefore,</p>\n\n<p>$$\\angle BAD=\\angle EAD-\\angle EAB=\\frac{\\pi}{4}-\\frac{\\pi}{12}=\\frac{\\pi}{6}$$</p>\n" }, { "answer_id": 2097329, "score": 2, "text": "<p>Let's denote $\\angle BAD$ as $\\alpha$. We have that $\\angle ADB=180-(45+\\alpha)$. If $\\angle ADB=45$ and $\\angle ACD=30$, then $\\angle ADC=180-45$ and $\\angle CDA=180-30-(180-45)=15$. 
Now, from the sine theorem:</p>\n\n<p>$$ \\frac{|CD|}{\\sin(15)}=\\frac{|AC|}{\\sin(180-45)}$$\n$$ \\frac{|AC|}{\\sin(180-(45+\\alpha))}=\\frac{|BD|}{\\sin(\\alpha)}=\\frac{|CD|}{\\sin(\\alpha)}$$</p>\n\n<p>From the first one, we have that: </p>\n\n<p>$$ |AC|=\\frac{\\sin(180-45)}{\\sin(15)}|CD|$$</p>\n\n<p>Plugging this into the second one gives us:</p>\n\n<p>$$ \\frac{\\sin(180-45)}{\\sin(15)}|CD|\\frac{1}{\\sin(180-(45+\\alpha))}=\\frac{|CD|}{\\sin(\\alpha)} $$\n$$ \\frac{\\sin(15)\\sin(180-45)}{\\sin(180-(45+\\alpha))}=\\frac{1}{\\sin(\\alpha)} $$\n$$ \\sin(\\alpha)=\\frac{\\sin(180-(45+\\alpha))}{\\sin(15)\\sin(180-45)}=\\frac{\\sin(45+\\alpha)}{\\sin(15)\\sin(180-45)} $$</p>\n\n<p>Can you take it from here? </p>\n" }, { "answer_id": 4583392, "score": 2, "text": "<p>This can be solved very easily using a simple geometry construction:</p>\n<p><a href=\"https://i.stack.imgur.com/ELV6P.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ELV6P.png\" alt=\"enter image description here\" /></a></p>\n<p>1.) Draw segment <span class=\"math-container\">$DE$</span> where point <span class=\"math-container\">$E$</span> lies on <span class=\"math-container\">$AC$</span> such that <span class=\"math-container\">$\\angle DAC=\\angle ADE=15$</span>. This means that <span class=\"math-container\">$\\angle DEC=30$</span>, therefore <span class=\"math-container\">$AE=ED=BD=DC$</span>. Now, join <span class=\"math-container\">$B$</span> and <span class=\"math-container\">$E$</span> via <span class=\"math-container\">$BE$</span>. Notice that <span class=\"math-container\">$\\angle BDE=60$</span> and <span class=\"math-container\">$BD=DE$</span>, this proves that <span class=\"math-container\">$\\triangle BDE$</span> is equilateral, and therefore, <span class=\"math-container\">$AE=ED=BD=DC=BE$</span>.</p>\n<p>2.) 
Lastly, notice that <span class=\"math-container\">$\\angle BEC=\\angle BEA=90$</span>, and that <span class=\"math-container\">$AE=BE$</span>, this means <span class=\"math-container\">$\\triangle BAE$</span> is an isosceles right triangle, and <span class=\"math-container\">$\\angle BAE=45$</span>. Therefore <span class=\"math-container\">$\\angle BAD=45-15=30$</span></p>\n" }, { "answer_id": 2099293, "score": 1, "text": "<p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p>\n\n<p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. By construction, $AC'$ is the orthogonal bisector of $B'C$ so $$B'C' = CC'\\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, CAC' = \\angle \\, B'AC' = 30^{\\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equilateral triangle $AB'C$).</p>\n\n<p><a href=\"https://i.stack.imgur.com/bBf9T.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bBf9T.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now, by assumption, $\\angle \\, DAC = \\angle \\, ADB - \\angle \\, ACD = 45^{\\circ} - 30^{\\circ} = 15^{\\circ}$. Hence, line $AD$ is the angle bisector in isosceles triangle $CAC'$ through vertex $A$ and so $AD$ is also the orthogonal bisector of $CC'$ which means that $DC = DC'$. Direct angle chasing shows that $\\angle \\, CDC' = 90^{\\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC' = CC' = B'C' \\,\\,\\, \\text{ and } \\,\\,\\, \\angle \\, BC'C = 90^{\\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\\angle \\, BC'B' = 60^{\\circ}$. Thus $BB'C'$ is an equilateral triangle.
This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. Therefore, $$\\angle \\, BAB' = \\angle \\, BAC' = \\frac{1}{2} \\, \\angle \\, B'AC' = \\frac{1}{2} \\, 30^{\\circ} = 15^{\\circ}$$ We can conclude that $$\\angle \\, BAD = \\angle \\, B'AC - \\big(\\angle \\, BAB' + \\angle DAC \\big) = 60^{\\circ} - (15^{\\circ} + 15^{\\circ}) = 30^{\\circ}$$</p>\n" } ]
Question: In a triangle $ABC$ let $D$ be the midpoint of $BC$. If $\angle ADB=45^\circ$ and $\angle ACD=30^\circ$ then find $\angle BAD$ Answer 1: <p>Let's denote $\angle BAD$ as $\alpha$. We have that $\angle ABD=180-(45+\alpha)$. If $\angle ADB=45$ and $\angle ACD=30$, then $\angle ADC=180-45$ and $\angle DAC=180-30-(180-45)=15$. Now, from the sine theorem (in triangle $ADC$, and then in triangle $ABD$, where $AD$ is the side opposite $\angle ABD$):</p> <p>$$ \frac{|CD|}{\sin(15)}=\frac{|AD|}{\sin(30)}$$ $$ \frac{|AD|}{\sin(180-(45+\alpha))}=\frac{|BD|}{\sin(\alpha)}=\frac{|CD|}{\sin(\alpha)}$$</p> <p>From the first one, we have that: </p> <p>$$ |AD|=\frac{\sin(30)}{\sin(15)}|CD|$$</p> <p>Plugging this into the second one gives us:</p> <p>$$ \frac{\sin(30)}{\sin(15)}|CD|\frac{1}{\sin(180-(45+\alpha))}=\frac{|CD|}{\sin(\alpha)} $$ $$ \frac{\sin(30)}{\sin(15)\sin(180-(45+\alpha))}=\frac{1}{\sin(\alpha)} $$ $$ \sin(\alpha)=\frac{\sin(15)\sin(180-(45+\alpha))}{\sin(30)}=2\sin(15)\sin(45+\alpha) $$</p> <p>Can you take it from here? </p> Answer 2: <p>This can be solved very easily using a simple geometry construction:</p> <p><a href="https://i.stack.imgur.com/ELV6P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ELV6P.png" alt="enter image description here" /></a></p> <p>1.) Draw segment <span class="math-container">$DE$</span> where point <span class="math-container">$E$</span> lies on <span class="math-container">$AC$</span> such that <span class="math-container">$\angle DAC=\angle ADE=15$</span>. This means that <span class="math-container">$\angle DEC=30$</span>, therefore <span class="math-container">$AE=ED=BD=DC$</span>. Now, join <span class="math-container">$B$</span> and <span class="math-container">$E$</span> via <span class="math-container">$BE$</span>.
Notice that <span class="math-container">$\angle BDE=60$</span> and <span class="math-container">$BD=DE$</span>, this proves that <span class="math-container">$\triangle BDE$</span> is equilateral, and therefore, <span class="math-container">$AE=ED=BD=DC=BE$</span>.</p> <p>2.) Lastly, notice that <span class="math-container">$\angle BEC=\angle BEA=90$</span>, and that <span class="math-container">$AE=BE$</span>, this means <span class="math-container">$\triangle BAE$</span> is an isosceles right triangle, and <span class="math-container">$\angle BAE=45$</span>. Therefore <span class="math-container">$\angle BAD=45-15=30$</span></p> Answer 3: <p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p> <p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. By construction, $AC'$ is the orthogonal bisector of $B'C$ so $$B'C' = CC'\,\,\, \text{ and } \,\,\, \angle \, CAC' = \angle \, B'AC' = 30^{\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equilateral triangle $AB'C$).</p> <p><a href="https://i.stack.imgur.com/bBf9T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bBf9T.png" alt="enter image description here"></a></p> <p>Now, by assumption, $\angle \, DAC = \angle \, ADB - \angle \, ACD = 45^{\circ} - 30^{\circ} = 15^{\circ}$. Hence, line $AD$ is the angle bisector in isosceles triangle $CAC'$ through vertex $A$ and so $AD$ is also the orthogonal bisector of $CC'$ which means that $DC = DC'$. Direct angle chasing shows that $\angle \, CDC' = 90^{\circ}$.
However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \,\,\, \text{ and } \,\,\, \angle \, BC'C = 90^{\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\angle \, BC'B' = 60^{\circ}$. Thus $BB'C'$ is an equilateral triangle. This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. Therefore, $$\angle \, BAB' = \angle \, BAC' = \frac{1}{2} \, \angle \, B'AC' = \frac{1}{2} \, 30^{\circ} = 15^{\circ}$$ We can conclude that $$\angle \, BAD = \angle \, B'AC - \big(\angle \, BAB' + \angle DAC \big) = 60^{\circ} - (15^{\circ} + 15^{\circ}) = 30^{\circ}$$</p> Answer 4: <p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\triangle AEC$.</p> <p><a href="https://i.stack.imgur.com/ZaxoF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZaxoF.png" alt="enter image description here"></a></p> <p>Let $AC = b$</p> <p>Then by basic trigonometry we get:</p> <p>$EA=\frac{b}{2}$</p> <p>$DC=EC-ED=\frac{\sqrt{3}-1}{2}b$</p> <p>$EB=EC-BC=\frac{2-\sqrt{3}}{2}b$</p> <p>$\angle EAB=\tan^{-1}\frac{EB}{EA}=\frac{\pi}{12}$</p> <p>Therefore,</p> <p>$$\angle BAD=\angle EAD-\angle EAB=\frac{\pi}{4}-\frac{\pi}{12}=\frac{\pi}{6}$$</p> Answer 5: <p>Let's denote $\angle BAD$ as $\alpha$. We have that $\angle ADB=180-(45+\alpha)$. If $\angle ADB=45$ and $\angle ACD=30$, then $\angle ADC=180-45$ and $\angle CDA=180-30-(180-45)=15$. 
Now, from the sine theorem:</p> <p>$$ \frac{|CD|}{\sin(15)}=\frac{|AC|}{\sin(180-45)}$$ $$ \frac{|AC|}{\sin(180-(45+\alpha))}=\frac{|BD|}{\sin(\alpha)}=\frac{|CD|}{\sin(\alpha)}$$</p> <p>From the first one, we have that: </p> <p>$$ |AC|=\frac{\sin(180-45)}{\sin(15)}|CD|$$</p> <p>Plugging this into the second one gives us:</p> <p>$$ \frac{\sin(180-45)}{\sin(15)}|CD|\frac{1}{\sin(180-(45+\alpha))}=\frac{|CD|}{\sin(\alpha)} $$ $$ \frac{\sin(15)\sin(180-45)}{\sin(180-(45+\alpha))}=\frac{1}{\sin(\alpha)} $$ $$ \sin(\alpha)=\frac{\sin(180-(45+\alpha))}{\sin(15)\sin(180-45)}=\frac{\sin(45+\alpha)}{\sin(15)\sin(180-45)} $$</p> <p>Can you take it from here? </p> Answer 6: <p>This can be solved very easily using a simple geometry construction:</p> <p><a href="https://i.stack.imgur.com/ELV6P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ELV6P.png" alt="enter image description here" /></a></p> <p>1.) Draw segment <span class="math-container">$DE$</span> where point <span class="math-container">$E$</span> lies on <span class="math-container">$AC$</span> such that <span class="math-container">$\angle DAC=\angle ADE=15$</span>. This means that <span class="math-container">$\angle DEC=30$</span>, therefore <span class="math-container">$AE=ED=BD=DC$</span>. Now, join <span class="math-container">$B$</span> and <span class="math-container">$E$</span> via <span class="math-container">$BE$</span>. Notice that <span class="math-container">$\angle BDE=60$</span> and <span class="math-container">$BD=DE$</span>, this proves that <span class="math-container">$\triangle BDE$</span> is equilateral, and therefore, <span class="math-container">$AE=ED=BD=DC=BE$</span>.</p> <p>2.) Lastly, notice that <span class="math-container">$\angle BEC=\angle BEA=90$</span>, and that <span class="math-container">$AE=BE$</span>, this means <span class="math-container">$\triangle BAE$</span> is an isosceles right triangle, and <span class="math-container">$\angle BAE=45$</span>. 
Therefore <span class="math-container">$\angle BAD=45-15=30$</span></p> Answer 7: <p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p> <p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\,\,\, \text{ and } \,\,\, \angle \, CAC' = B'AC' = 30^{\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p> <p><a href="https://i.stack.imgur.com/bBf9T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bBf9T.png" alt="enter image description here"></a></p> <p>Now, by assumption, $\angle \, DAC = \angle \, ADB - \angle \, ACD = 45^{\circ} - 30^{\circ} = 15^{\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\angle \, CDC' = 90^{\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \,\,\, \text{ and } \,\,\, \angle \, BC'C = 90^{\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\angle \, BC'B' = 60^{\circ}$. Thus $BB'C'$ is an equilateral triangle. This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. 
Therefore, $$\angle \, BAB' = \angle \, BAC' = \frac{1}{2} \, \angle \, B'AC' = \frac{1}{2} \, 30^{\circ} = 15^{\circ}$$ We can conclude that $$\angle \, BAD = \angle \, B'AC - \big(\angle \, BAB' + \angle DAC \big) = 60^{\circ} - (15^{\circ} + 15^{\circ}) = 30^{\circ}$$</p> Answer 8: <p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\triangle AEC$.</p> <p><a href="https://i.stack.imgur.com/ZaxoF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZaxoF.png" alt="enter image description here"></a></p> <p>Let $AC = b$</p> <p>Then by basic trigonometry we get:</p> <p>$EA=\frac{b}{2}$</p> <p>$DC=EC-ED=\frac{\sqrt{3}-1}{2}b$</p> <p>$EB=EC-BC=\frac{2-\sqrt{3}}{2}b$</p> <p>$\angle EAB=\tan^{-1}\frac{EB}{EA}=\frac{\pi}{12}$</p> <p>Therefore,</p> <p>$$\angle BAD=\angle EAD-\angle EAB=\frac{\pi}{4}-\frac{\pi}{12}=\frac{\pi}{6}$$</p> Answer 9: <p>Let's denote $\angle BAD$ as $\alpha$. We have that $\angle ADB=180-(45+\alpha)$. If $\angle ADB=45$ and $\angle ACD=30$, then $\angle ADC=180-45$ and $\angle CDA=180-30-(180-45)=15$. Now, from the sine theorem:</p> <p>$$ \frac{|CD|}{\sin(15)}=\frac{|AC|}{\sin(180-45)}$$ $$ \frac{|AC|}{\sin(180-(45+\alpha))}=\frac{|BD|}{\sin(\alpha)}=\frac{|CD|}{\sin(\alpha)}$$</p> <p>From the first one, we have that: </p> <p>$$ |AC|=\frac{\sin(180-45)}{\sin(15)}|CD|$$</p> <p>Plugging this into the second one gives us:</p> <p>$$ \frac{\sin(180-45)}{\sin(15)}|CD|\frac{1}{\sin(180-(45+\alpha))}=\frac{|CD|}{\sin(\alpha)} $$ $$ \frac{\sin(15)\sin(180-45)}{\sin(180-(45+\alpha))}=\frac{1}{\sin(\alpha)} $$ $$ \sin(\alpha)=\frac{\sin(180-(45+\alpha))}{\sin(15)\sin(180-45)}=\frac{\sin(45+\alpha)}{\sin(15)\sin(180-45)} $$</p> <p>Can you take it from here? 
</p> Answer 10: <p>This can be solved very easily using a simple geometry construction:</p> <p><a href="https://i.stack.imgur.com/ELV6P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ELV6P.png" alt="enter image description here" /></a></p> <p>1.) Draw segment <span class="math-container">$DE$</span> where point <span class="math-container">$E$</span> lies on <span class="math-container">$AC$</span> such that <span class="math-container">$\angle DAC=\angle ADE=15$</span>. This means that <span class="math-container">$\angle DEC=30$</span>, therefore <span class="math-container">$AE=ED=BD=DC$</span>. Now, join <span class="math-container">$B$</span> and <span class="math-container">$E$</span> via <span class="math-container">$BE$</span>. Notice that <span class="math-container">$\angle BDE=60$</span> and <span class="math-container">$BD=DE$</span>, this proves that <span class="math-container">$\triangle BDE$</span> is equilateral, and therefore, <span class="math-container">$AE=ED=BD=DC=BE$</span>.</p> <p>2.) Lastly, notice that <span class="math-container">$\angle BEC=\angle BEA=90$</span>, and that <span class="math-container">$AE=BE$</span>, this means <span class="math-container">$\triangle BAE$</span> is an isosceles right triangle, and <span class="math-container">$\angle BAE=45$</span>. Therefore <span class="math-container">$\angle BAD=45-15=30$</span></p> Answer 11: <p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p> <p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. 
By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\,\,\, \text{ and } \,\,\, \angle \, CAC' = B'AC' = 30^{\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p> <p><a href="https://i.stack.imgur.com/bBf9T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bBf9T.png" alt="enter image description here"></a></p> <p>Now, by assumption, $\angle \, DAC = \angle \, ADB - \angle \, ACD = 45^{\circ} - 30^{\circ} = 15^{\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\angle \, CDC' = 90^{\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \,\,\, \text{ and } \,\,\, \angle \, BC'C = 90^{\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\angle \, BC'B' = 60^{\circ}$. Thus $BB'C'$ is an equilateral triangle. This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. 
Therefore, $$\angle \, BAB' = \angle \, BAC' = \frac{1}{2} \, \angle \, B'AC' = \frac{1}{2} \, 30^{\circ} = 15^{\circ}$$ We can conclude that $$\angle \, BAD = \angle \, B'AC - \big(\angle \, BAB' + \angle DAC \big) = 60^{\circ} - (15^{\circ} + 15^{\circ}) = 30^{\circ}$$</p> Answer 12: <p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\triangle AEC$.</p> <p><a href="https://i.stack.imgur.com/ZaxoF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZaxoF.png" alt="enter image description here"></a></p> <p>Let $AC = b$</p> <p>Then by basic trigonometry we get:</p> <p>$EA=\frac{b}{2}$</p> <p>$DC=EC-ED=\frac{\sqrt{3}-1}{2}b$</p> <p>$EB=EC-BC=\frac{2-\sqrt{3}}{2}b$</p> <p>$\angle EAB=\tan^{-1}\frac{EB}{EA}=\frac{\pi}{12}$</p> <p>Therefore,</p> <p>$$\angle BAD=\angle EAD-\angle EAB=\frac{\pi}{4}-\frac{\pi}{12}=\frac{\pi}{6}$$</p> Answer 13: <p>Let's denote $\angle BAD$ as $\alpha$. We have that $\angle ADB=180-(45+\alpha)$. If $\angle ADB=45$ and $\angle ACD=30$, then $\angle ADC=180-45$ and $\angle CDA=180-30-(180-45)=15$. Now, from the sine theorem:</p> <p>$$ \frac{|CD|}{\sin(15)}=\frac{|AC|}{\sin(180-45)}$$ $$ \frac{|AC|}{\sin(180-(45+\alpha))}=\frac{|BD|}{\sin(\alpha)}=\frac{|CD|}{\sin(\alpha)}$$</p> <p>From the first one, we have that: </p> <p>$$ |AC|=\frac{\sin(180-45)}{\sin(15)}|CD|$$</p> <p>Plugging this into the second one gives us:</p> <p>$$ \frac{\sin(180-45)}{\sin(15)}|CD|\frac{1}{\sin(180-(45+\alpha))}=\frac{|CD|}{\sin(\alpha)} $$ $$ \frac{\sin(15)\sin(180-45)}{\sin(180-(45+\alpha))}=\frac{1}{\sin(\alpha)} $$ $$ \sin(\alpha)=\frac{\sin(180-(45+\alpha))}{\sin(15)\sin(180-45)}=\frac{\sin(45+\alpha)}{\sin(15)\sin(180-45)} $$</p> <p>Can you take it from here? 
</p> Answer 14: <p>This can be solved very easily using a simple geometry construction:</p> <p><a href="https://i.stack.imgur.com/ELV6P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ELV6P.png" alt="enter image description here" /></a></p> <p>1.) Draw segment <span class="math-container">$DE$</span> where point <span class="math-container">$E$</span> lies on <span class="math-container">$AC$</span> such that <span class="math-container">$\angle DAC=\angle ADE=15$</span>. This means that <span class="math-container">$\angle DEC=30$</span>, therefore <span class="math-container">$AE=ED=BD=DC$</span>. Now, join <span class="math-container">$B$</span> and <span class="math-container">$E$</span> via <span class="math-container">$BE$</span>. Notice that <span class="math-container">$\angle BDE=60$</span> and <span class="math-container">$BD=DE$</span>, this proves that <span class="math-container">$\triangle BDE$</span> is equilateral, and therefore, <span class="math-container">$AE=ED=BD=DC=BE$</span>.</p> <p>2.) Lastly, notice that <span class="math-container">$\angle BEC=\angle BEA=90$</span>, and that <span class="math-container">$AE=BE$</span>, this means <span class="math-container">$\triangle BAE$</span> is an isosceles right triangle, and <span class="math-container">$\angle BAE=45$</span>. Therefore <span class="math-container">$\angle BAD=45-15=30$</span></p> Answer 15: <p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p> <p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. 
By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\,\,\, \text{ and } \,\,\, \angle \, CAC' = B'AC' = 30^{\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p> <p><a href="https://i.stack.imgur.com/bBf9T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bBf9T.png" alt="enter image description here"></a></p> <p>Now, by assumption, $\angle \, DAC = \angle \, ADB - \angle \, ACD = 45^{\circ} - 30^{\circ} = 15^{\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\angle \, CDC' = 90^{\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \,\,\, \text{ and } \,\,\, \angle \, BC'C = 90^{\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\angle \, BC'B' = 60^{\circ}$. Thus $BB'C'$ is an equilateral triangle. This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. 
Therefore, $$\angle \, BAB' = \angle \, BAC' = \frac{1}{2} \, \angle \, B'AC' = \frac{1}{2} \, 30^{\circ} = 15^{\circ}$$ We can conclude that $$\angle \, BAD = \angle \, B'AC - \big(\angle \, BAB' + \angle DAC \big) = 60^{\circ} - (15^{\circ} + 15^{\circ}) = 30^{\circ}$$</p> Answer 16: <p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\triangle AEC$.</p> <p><a href="https://i.stack.imgur.com/ZaxoF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZaxoF.png" alt="enter image description here"></a></p> <p>Let $AC = b$</p> <p>Then by basic trigonometry we get:</p> <p>$EA=\frac{b}{2}$</p> <p>$DC=EC-ED=\frac{\sqrt{3}-1}{2}b$</p> <p>$EB=EC-BC=\frac{2-\sqrt{3}}{2}b$</p> <p>$\angle EAB=\tan^{-1}\frac{EB}{EA}=\frac{\pi}{12}$</p> <p>Therefore,</p> <p>$$\angle BAD=\angle EAD-\angle EAB=\frac{\pi}{4}-\frac{\pi}{12}=\frac{\pi}{6}$$</p> Answer 17: <p>Let's denote $\angle BAD$ as $\alpha$. We have that $\angle ADB=180-(45+\alpha)$. If $\angle ADB=45$ and $\angle ACD=30$, then $\angle ADC=180-45$ and $\angle CDA=180-30-(180-45)=15$. Now, from the sine theorem:</p> <p>$$ \frac{|CD|}{\sin(15)}=\frac{|AC|}{\sin(180-45)}$$ $$ \frac{|AC|}{\sin(180-(45+\alpha))}=\frac{|BD|}{\sin(\alpha)}=\frac{|CD|}{\sin(\alpha)}$$</p> <p>From the first one, we have that: </p> <p>$$ |AC|=\frac{\sin(180-45)}{\sin(15)}|CD|$$</p> <p>Plugging this into the second one gives us:</p> <p>$$ \frac{\sin(180-45)}{\sin(15)}|CD|\frac{1}{\sin(180-(45+\alpha))}=\frac{|CD|}{\sin(\alpha)} $$ $$ \frac{\sin(15)\sin(180-45)}{\sin(180-(45+\alpha))}=\frac{1}{\sin(\alpha)} $$ $$ \sin(\alpha)=\frac{\sin(180-(45+\alpha))}{\sin(15)\sin(180-45)}=\frac{\sin(45+\alpha)}{\sin(15)\sin(180-45)} $$</p> <p>Can you take it from here? 
</p> Answer 18: <p>This can be solved very easily using a simple geometry construction:</p> <p><a href="https://i.stack.imgur.com/ELV6P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ELV6P.png" alt="enter image description here" /></a></p> <p>1.) Draw segment <span class="math-container">$DE$</span> where point <span class="math-container">$E$</span> lies on <span class="math-container">$AC$</span> such that <span class="math-container">$\angle DAC=\angle ADE=15$</span>. This means that <span class="math-container">$\angle DEC=30$</span>, therefore <span class="math-container">$AE=ED=BD=DC$</span>. Now, join <span class="math-container">$B$</span> and <span class="math-container">$E$</span> via <span class="math-container">$BE$</span>. Notice that <span class="math-container">$\angle BDE=60$</span> and <span class="math-container">$BD=DE$</span>, this proves that <span class="math-container">$\triangle BDE$</span> is equilateral, and therefore, <span class="math-container">$AE=ED=BD=DC=BE$</span>.</p> <p>2.) Lastly, notice that <span class="math-container">$\angle BEC=\angle BEA=90$</span>, and that <span class="math-container">$AE=BE$</span>, this means <span class="math-container">$\triangle BAE$</span> is an isosceles right triangle, and <span class="math-container">$\angle BAE=45$</span>. Therefore <span class="math-container">$\angle BAD=45-15=30$</span></p> Answer 19: <p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p> <p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. 
By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\,\,\, \text{ and } \,\,\, \angle \, CAC' = B'AC' = 30^{\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p> <p><a href="https://i.stack.imgur.com/bBf9T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bBf9T.png" alt="enter image description here"></a></p> <p>Now, by assumption, $\angle \, DAC = \angle \, ADB - \angle \, ACD = 45^{\circ} - 30^{\circ} = 15^{\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\angle \, CDC' = 90^{\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \,\,\, \text{ and } \,\,\, \angle \, BC'C = 90^{\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\angle \, BC'B' = 60^{\circ}$. Thus $BB'C'$ is an equilateral triangle. This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. 
Therefore, $$\angle \, BAB' = \angle \, BAC' = \frac{1}{2} \, \angle \, B'AC' = \frac{1}{2} \, 30^{\circ} = 15^{\circ}$$ We can conclude that $$\angle \, BAD = \angle \, B'AC - \big(\angle \, BAB' + \angle DAC \big) = 60^{\circ} - (15^{\circ} + 15^{\circ}) = 30^{\circ}$$</p> Answer 20: <p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\triangle AEC$.</p> <p><a href="https://i.stack.imgur.com/ZaxoF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZaxoF.png" alt="enter image description here"></a></p> <p>Let $AC = b$</p> <p>Then by basic trigonometry we get:</p> <p>$EA=\frac{b}{2}$</p> <p>$DC=EC-ED=\frac{\sqrt{3}-1}{2}b$</p> <p>$EB=EC-BC=\frac{2-\sqrt{3}}{2}b$</p> <p>$\angle EAB=\tan^{-1}\frac{EB}{EA}=\frac{\pi}{12}$</p> <p>Therefore,</p> <p>$$\angle BAD=\angle EAD-\angle EAB=\frac{\pi}{4}-\frac{\pi}{12}=\frac{\pi}{6}$$</p> Answer 21: <p>Let's denote $\angle BAD$ as $\alpha$. We have that $\angle ADB=180-(45+\alpha)$. If $\angle ADB=45$ and $\angle ACD=30$, then $\angle ADC=180-45$ and $\angle CDA=180-30-(180-45)=15$. Now, from the sine theorem:</p> <p>$$ \frac{|CD|}{\sin(15)}=\frac{|AC|}{\sin(180-45)}$$ $$ \frac{|AC|}{\sin(180-(45+\alpha))}=\frac{|BD|}{\sin(\alpha)}=\frac{|CD|}{\sin(\alpha)}$$</p> <p>From the first one, we have that: </p> <p>$$ |AC|=\frac{\sin(180-45)}{\sin(15)}|CD|$$</p> <p>Plugging this into the second one gives us:</p> <p>$$ \frac{\sin(180-45)}{\sin(15)}|CD|\frac{1}{\sin(180-(45+\alpha))}=\frac{|CD|}{\sin(\alpha)} $$ $$ \frac{\sin(15)\sin(180-45)}{\sin(180-(45+\alpha))}=\frac{1}{\sin(\alpha)} $$ $$ \sin(\alpha)=\frac{\sin(180-(45+\alpha))}{\sin(15)\sin(180-45)}=\frac{\sin(45+\alpha)}{\sin(15)\sin(180-45)} $$</p> <p>Can you take it from here? 
</p> Answer 22: <p>This can be solved very easily using a simple geometry construction:</p> <p><a href="https://i.stack.imgur.com/ELV6P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ELV6P.png" alt="enter image description here" /></a></p> <p>1.) Draw segment <span class="math-container">$DE$</span> where point <span class="math-container">$E$</span> lies on <span class="math-container">$AC$</span> such that <span class="math-container">$\angle DAC=\angle ADE=15$</span>. This means that <span class="math-container">$\angle DEC=30$</span>, therefore <span class="math-container">$AE=ED=BD=DC$</span>. Now, join <span class="math-container">$B$</span> and <span class="math-container">$E$</span> via <span class="math-container">$BE$</span>. Notice that <span class="math-container">$\angle BDE=60$</span> and <span class="math-container">$BD=DE$</span>, this proves that <span class="math-container">$\triangle BDE$</span> is equilateral, and therefore, <span class="math-container">$AE=ED=BD=DC=BE$</span>.</p> <p>2.) Lastly, notice that <span class="math-container">$\angle BEC=\angle BEA=90$</span>, and that <span class="math-container">$AE=BE$</span>, this means <span class="math-container">$\triangle BAE$</span> is an isosceles right triangle, and <span class="math-container">$\angle BAE=45$</span>. Therefore <span class="math-container">$\angle BAD=45-15=30$</span></p> Answer 23: <p>One can use a construction if one does not want to use trigonometry but only very simple but tricky methods. </p> <p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. 
By construction, $AC'$ is the orthgognal bisector of $B'C$ so $$B'C' = CC'\,\,\, \text{ and } \,\,\, \angle \, CAC' = B'AC' = 30^{\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equlateral triangle $AB'C$).</p> <p><a href="https://i.stack.imgur.com/bBf9T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bBf9T.png" alt="enter image description here"></a></p> <p>Now, by assumption, $\angle \, DAC = \angle \, ADB - \angle \, ACD = 45^{\circ} - 30^{\circ} = 15^{\circ}$. Hence, line $AD$ is the angle bisector in isosceles triagnle $CAC'$ through vertex $A$ and so $AD$ is also orthogonal bisectror of $CC'$ which means that $DC = DC'$. A direct angle chasing shows that $\angle \, CDC' = 90^{\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC ' = CC' = B'C' \,\,\, \text{ and } \,\,\, \angle \, BC'C = 90^{\circ}$$ Hence triangle $BB'C'$ is isosceles and after a direct angle chasing, one sees that $\angle \, BC'B' = 60^{\circ}$. Thus $BB'C'$ is an equilateral triangle. This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. 
Therefore, $$\angle \, BAB' = \angle \, BAC' = \frac{1}{2} \, \angle \, B'AC' = \frac{1}{2} \, 30^{\circ} = 15^{\circ}$$ We can conclude that $$\angle \, BAD = \angle \, B'AC - \big(\angle \, BAB' + \angle DAC \big) = 60^{\circ} - (15^{\circ} + 15^{\circ}) = 30^{\circ}$$</p> Answer 24: <p>Drop perpendicular from $A$ on extended $BC$ intersecting at $E$ to form right triangle $\triangle AEC$.</p> <p><a href="https://i.stack.imgur.com/ZaxoF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZaxoF.png" alt="enter image description here"></a></p> <p>Let $AC = b$</p> <p>Then by basic trigonometry we get:</p> <p>$EA=\frac{b}{2}$</p> <p>$DC=EC-ED=\frac{\sqrt{3}-1}{2}b$</p> <p>$EB=EC-BC=\frac{2-\sqrt{3}}{2}b$</p> <p>$\angle EAB=\tan^{-1}\frac{EB}{EA}=\frac{\pi}{12}$</p> <p>Therefore,</p> <p>$$\angle BAD=\angle EAD-\angle EAB=\frac{\pi}{4}-\frac{\pi}{12}=\frac{\pi}{6}$$</p> Answer 25: <p>Let's denote $\angle BAD$ as $\alpha$. We have that $\angle ADB=180-(45+\alpha)$. If $\angle ADB=45$ and $\angle ACD=30$, then $\angle ADC=180-45$ and $\angle CDA=180-30-(180-45)=15$. Now, from the sine theorem:</p> <p>$$ \frac{|CD|}{\sin(15)}=\frac{|AC|}{\sin(180-45)}$$ $$ \frac{|AC|}{\sin(180-(45+\alpha))}=\frac{|BD|}{\sin(\alpha)}=\frac{|CD|}{\sin(\alpha)}$$</p> <p>From the first one, we have that: </p> <p>$$ |AC|=\frac{\sin(180-45)}{\sin(15)}|CD|$$</p> <p>Plugging this into the second one gives us:</p> <p>$$ \frac{\sin(180-45)}{\sin(15)}|CD|\frac{1}{\sin(180-(45+\alpha))}=\frac{|CD|}{\sin(\alpha)} $$ $$ \frac{\sin(15)\sin(180-45)}{\sin(180-(45+\alpha))}=\frac{1}{\sin(\alpha)} $$ $$ \sin(\alpha)=\frac{\sin(180-(45+\alpha))}{\sin(15)\sin(180-45)}=\frac{\sin(45+\alpha)}{\sin(15)\sin(180-45)} $$</p> <p>Can you take it from here? 
</p> Answer 26: <p>This can be solved very easily using a simple geometry construction:</p> <p><a href="https://i.stack.imgur.com/ELV6P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ELV6P.png" alt="enter image description here" /></a></p> <p>1.) Draw segment <span class="math-container">$DE$</span> where point <span class="math-container">$E$</span> lies on <span class="math-container">$AC$</span> such that <span class="math-container">$\angle DAC=\angle ADE=15$</span>. This means that <span class="math-container">$\angle DEC=30$</span>, therefore <span class="math-container">$AE=ED=BD=DC$</span>. Now, join <span class="math-container">$B$</span> and <span class="math-container">$E$</span> via <span class="math-container">$BE$</span>. Notice that <span class="math-container">$\angle BDE=60$</span> and <span class="math-container">$BD=DE$</span>, which proves that <span class="math-container">$\triangle BDE$</span> is equilateral, and therefore, <span class="math-container">$AE=ED=BD=DC=BE$</span>.</p> <p>2.) Lastly, notice that <span class="math-container">$\angle BEC=\angle BEA=90$</span>, and that <span class="math-container">$AE=BE$</span>, which means <span class="math-container">$\triangle BAE$</span> is an isosceles right triangle, and <span class="math-container">$\angle BAE=45$</span>. Therefore <span class="math-container">$\angle BAD=45-15=30$</span></p> Answer 27: <p>One can use a construction if one wants to avoid trigonometry and rely only on very simple but tricky methods. </p> <p>Let point $B'$ be such that triangle $AB'C$ is equilateral and $B$ and $B'$ lie on the same side of line $AC.$ Let point $C'$ be such that line $AC'$ is orthogonal to $B'C$ and $AC' = AC = AB' = B'C$. 
By construction, $AC'$ is the orthogonal bisector of $B'C$ so $$B'C' = CC'\,\,\, \text{ and } \,\,\, \angle \, CAC' = \angle \, B'AC' = 30^{\circ}$$ Therefore, triangles $CAC'$ and $B'AC'$ are congruent isosceles triangles with $30^{\circ}$ internal angle at vertex $A$ (because $AC'$ is the angle bisector of equilateral triangle $AB'C$).</p> <p><a href="https://i.stack.imgur.com/bBf9T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bBf9T.png" alt="enter image description here"></a></p> <p>Now, by assumption, $\angle \, DAC = \angle \, ADB - \angle \, ACD = 45^{\circ} - 30^{\circ} = 15^{\circ}$. Hence, line $AD$ is the angle bisector through vertex $A$ in the isosceles triangle $CAC'$, and so $AD$ is also the orthogonal bisector of $CC'$, which means that $DC = DC'$. Direct angle chasing shows that $\angle \, CDC' = 90^{\circ}$. However, by assumption, $BD = DC = DC'$ so $$BC' = CC' = B'C' \,\,\, \text{ and } \,\,\, \angle \, BC'C = 90^{\circ}$$ Hence triangle $BB'C'$ is isosceles and, after direct angle chasing, one sees that $\angle \, BC'B' = 60^{\circ}$. Thus $BB'C'$ is an equilateral triangle. This means that triangles $ABC'$ and $ABB'$ are congruent, because $BB' = BC'$, segment $AB$ is common and $AB'=AC'$ by construction. Therefore, $$\angle \, BAB' = \angle \, BAC' = \frac{1}{2} \, \angle \, B'AC' = \frac{1}{2} \, 30^{\circ} = 15^{\circ}$$ We can conclude that $$\angle \, BAD = \angle \, B'AC - \big(\angle \, BAB' + \angle \, DAC \big) = 60^{\circ} - (15^{\circ} + 15^{\circ}) = 30^{\circ}$$</p>
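The synthetic and trigonometric arguments above can be cross-checked numerically. The sketch below (not part of any answer; the coordinates and the `angle_at` helper are my own, chosen for convenience) places $D$ at the origin with $B$, $D$, $C$ collinear and $BD = DC = 1$, uses the given $\angle ADB = 45^{\circ}$ and $\angle ACD = 30^{\circ}$ to pin down $A$, and then measures $\angle BAD$:

```python
import math

# D at the origin, C = (1, 0), B = (-1, 0): B, D, C collinear with BD = DC = 1.
D, C, B = (0.0, 0.0), (1.0, 0.0), (-1.0, 0.0)

# angle ADB = 45 deg puts ray DA at 135 deg from the positive x-axis.
# In triangle ADC the angles are then 135 (at D), 30 (at C), 15 (at A),
# so the law of sines gives AD = DC * sin(30 deg) / sin(15 deg).
AD = math.sin(math.radians(30)) / math.sin(math.radians(15))
A = (AD * math.cos(math.radians(135)), AD * math.sin(math.radians(135)))

def angle_at(P, Q, R):
    """Angle QPR at vertex P, in degrees."""
    v = (Q[0] - P[0], Q[1] - P[1])
    w = (R[0] - P[0], R[1] - P[1])
    cos_t = (v[0] * w[0] + v[1] * w[1]) / (math.hypot(*v) * math.hypot(*w))
    return math.degrees(math.acos(cos_t))

assert math.isclose(angle_at(D, A, B), 45.0, abs_tol=1e-6)  # given angle ADB
assert math.isclose(angle_at(C, A, D), 30.0, abs_tol=1e-6)  # given angle ACD
print(angle_at(A, B, D))  # angle BAD, approximately 30 degrees
```

The measured $\angle BAD$ agrees with the $30^{\circ}$ obtained by each answer above.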
How to lower/upper bound $n!$ using $1+x\leq e^x$?
2,098,138
6
https://math.stackexchange.com/questions/2098138/how-to-lower-upper-bound-n-using-1x-leq-ex
<p>I need to prove for all positive integers $n$</p> <p>$$ e\left(\frac{n}{e}\right)^n\leq n!\leq en\left(\frac{n}{e}\right)^n, $$ using the hint $1+x\leq e^x$ for all $x\in \mathbb{R}$.</p> <p>I did this:</p> <p>The hint says </p> <ul> <li>for $x=0$, $1\leq 1$;</li> <li>for $x=1$, $2\leq e$;</li> <li>...</li> <li>for $x=n-1$, $n\leq e^{n-1}$.</li> </ul> <p>So I multiplied these $n$ inequalities to get</p> <p>$$ n!\leq e^{n-1+\ldots+1}, $$ or $$ n!\leq e^{\frac{n(n-1)}{2}}, $$ and I get stuck there. </p>
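For what it is worth, a numeric comparison shows why the bound $n!\leq e^{\frac{n(n-1)}{2}}$ obtained in the question is too weak to finish the job: for every $n\geq 3$ it lies strictly above the target upper bound $en\left(\frac{n}{e}\right)^n$, so it cannot imply it. A quick illustrative check of my own (comparing logarithms to avoid overflow; `math.lgamma(n + 1)` equals $\ln n!$):

```python
import math

for n in range(3, 60):
    attempt = n * (n - 1) / 2                # log of e^{n(n-1)/2}
    target = 1 + (n + 1) * math.log(n) - n   # log of e * n * (n/e)^n
    actual = math.lgamma(n + 1)              # log of n!
    # both are upper bounds on log(n!), but the attempted one is weaker
    assert actual <= target <= attempt
```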
[ { "answer_id": 2098175, "score": 6, "text": "<p>Rearranging gives the equivalent inequalities\n$$1 \\le (n-1)! \\left(\\frac{e}{n}\\right)^{n-1} \\le n.$$</p>\n\n<p>When $n=1$ both inequalities are equalities.\nAssuming the statement holds for $n=k$, then we want to prove\n$$1 \\le k! \\left(\\frac{e}{k+1}\\right)^k \\le k+1.$$</p>\n\n<hr>\n\n<p>The first inequality holds since\n$$k! \\left(\\frac{e}{k+1}\\right)^k = \\underbrace{e \\cdot \\left(\\frac{k}{k+1}\\right)^k}_{\\ge 1} \\cdot \\underbrace{(k-1)! \\left(\\frac{e}{k}\\right)^{k-1}}_{\\ge 1 \\text{ by induction}}$$\nwhere the first term is $\\ge 1$ because the hint gives\n$$e^{1/k} \\frac{k}{k+1} \\ge \\left(1+\\frac{1}{k}\\right) \\frac{k}{k+1} = 1.$$</p>\n\n<hr>\n\n<p>The second inequality is due to\n$$k! \\left(\\frac{e}{k+1}\\right)^k = \\underbrace{e \\cdot \\left(\\frac{k}{k+1}\\right)^k}_{\\le \\frac{k+1}{k}} \\cdot \\underbrace{(k-1)! \\left(\\frac{e}{k}\\right)^{k-1}}_{\\le k \\text{ by induction}}$$\nwhere the first term is $\\le \\frac{k+1}{k}$ because\n$$e \\cdot \\left(\\frac{k}{k+1}\\right)^{k+1}\n= e \\cdot \\left(1-\\frac{1}{k+1}\\right)^{k+1} \\le e \\cdot e^{-1} = 1.$$\n[It is well known that $\\left(1-\\frac{1}{k+1}\\right)^{k+1}$ converges to $e^{-1}$ as $k \\to \\infty$, but one can show by induction that this actually increases monotonically to $e^{-1}$.]</p>\n" }, { "answer_id": 2098158, "score": 2, "text": "<p>Take the log of all sides, so we just need to prove that</p>\n\n<p>$$n\\ln n-n+1\\le\\ln(n!)\\le (n+1)\\ln n-n+1$$</p>\n\n<p>Then, it is easy to see that</p>\n\n<p>$$\\ln(n!)=\\sum_{k=1}^n\\ln(k)$$</p>\n\n<p>And that is the right Riemann sum to the following integral:</p>\n\n<p>$$\\sum_{k=1}^n\\ln(k)\\ge\\int_1^n\\ln(x)\\ dx=n\\ln n-n+1$$</p>\n\n<p>So we have proven the left side. 
Then notice that we have the following trapezoidal sum:</p>\n\n<p>$$\\frac{\\ln(n)+2\\ln((n-1)!)}2=\\frac{\\ln(n!)+\\ln((n-1)!)}2\\\\=\\sum_{k=1}^{n-1}\\frac{\\ln(k+1)+\\ln(k)}2\\\\\\le\\int_1^n\\ln(x)\\ dx=n\\ln(n)-n+1\\\\\\frac{\\ln(n)+2\\ln((n-1)!)}2\\le n\\ln(n)-n+1\\\\\\implies\\ln((n-1)!)\\le\\left(n-\\frac12\\right)\\ln(n)-n+1\\\\\\implies\\ln(n!)=\\ln(n)+\\ln((n-1)!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1$$</p>\n\n<p>Thus, when $n\\ge1$,</p>\n\n<p>$$\\ln(n!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1\\le(n+1)\\ln(n)-n+1$$</p>\n\n<p>and we have everything we wanted.</p>\n" }
\\left(\\frac{e}{k}\\right)^{k-1}}_{\\ge 1 \\text{ by induction}}$$\nwhere the first term is $\\ge 1$ because the hint gives\n$$e^{1/k} \\frac{k}{k+1} \\ge \\left(1+\\frac{1}{k}\\right) \\frac{k}{k+1} = 1.$$</p>\n\n<hr>\n\n<p>The second inequality is due to\n$$k! \\left(\\frac{e}{k+1}\\right)^k = \\underbrace{e \\cdot \\left(\\frac{k}{k+1}\\right)^k}_{\\le \\frac{k+1}{k}} \\cdot \\underbrace{(k-1)! \\left(\\frac{e}{k}\\right)^{k-1}}_{\\le k \\text{ by induction}}$$\nwhere the first term is $\\le \\frac{k+1}{k}$ because\n$$e \\cdot \\left(\\frac{k}{k+1}\\right)^{k+1}\n= e \\cdot \\left(1-\\frac{1}{k+1}\\right)^{k+1} \\le e \\cdot e^{-1} = 1.$$\n[It is well known that $\\left(1-\\frac{1}{k+1}\\right)^{k+1}$ converges to $e^{-1}$ as $k \\to \\infty$, but one can show by induction that this actually increases monotonically to $e^{-1}$.]</p>\n" }, { "answer_id": 2098158, "score": 2, "text": "<p>Take the log of all sides, so we just need to prove that</p>\n\n<p>$$n\\ln n-n+1\\le\\ln(n!)\\le (n+1)\\ln n-n+1$$</p>\n\n<p>Then, it is easy to see that</p>\n\n<p>$$\\ln(n!)=\\sum_{k=1}^n\\ln(k)$$</p>\n\n<p>And that is the right Riemann sum to the following integral:</p>\n\n<p>$$\\sum_{k=1}^n\\ln(k)\\ge\\int_1^n\\ln(x)\\ dx=n\\ln n-n+1$$</p>\n\n<p>So we have proven the left side. Then notice that we have the following trapezoidal sum:</p>\n\n<p>$$\\frac{\\ln(n)+2\\ln((n-1)!)}2=\\frac{\\ln(n!)+\\ln((n-1)!)}2\\\\=\\sum_{k=1}^n\\frac{\\ln(k+1)+\\ln(k)}2\\\\\\le\\int_1^n\\ln(x)\\ dx=n\\ln(n)-n+1\\\\\\frac{\\ln(n)+2\\ln((n-1)!)}2\\le n\\ln(n)-n+1\\\\\\implies\\ln((n-1)!)\\le\\left(n-\\frac12\\right)\\ln(n)-n+1\\\\\\implies\\ln(n!)=\\ln(n)+\\ln((n-1)!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1$$</p>\n\n<p>Thus, when $n\\ge1$,</p>\n\n<p>$$\\ln(n!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1\\le(n+1)\\ln(n)-n+1$$</p>\n\n<p>and we have everything we wanted.</p>\n" }, { "answer_id": 2098175, "score": 6, "text": "<p>Rearranging gives the equivalent inequalities\n$$1 \\le (n-1)! 
\\left(\\frac{e}{n}\\right)^{n-1} \\le n.$$</p>\n\n<p>When $n=1$ both inequalities are equalities.\nAssuming the statement holds for $n=k$, then we want to prove\n$$1 \\le k! \\left(\\frac{e}{k+1}\\right)^k \\le k+1.$$</p>\n\n<hr>\n\n<p>The first inequality holds since\n$$k! \\left(\\frac{e}{k+1}\\right)^k = \\underbrace{e \\cdot \\left(\\frac{k}{k+1}\\right)^k}_{\\ge 1} \\cdot \\underbrace{(k-1)! \\left(\\frac{e}{k}\\right)^{k-1}}_{\\ge 1 \\text{ by induction}}$$\nwhere the first term is $\\ge 1$ because the hint gives\n$$e^{1/k} \\frac{k}{k+1} \\ge \\left(1+\\frac{1}{k}\\right) \\frac{k}{k+1} = 1.$$</p>\n\n<hr>\n\n<p>The second inequality is due to\n$$k! \\left(\\frac{e}{k+1}\\right)^k = \\underbrace{e \\cdot \\left(\\frac{k}{k+1}\\right)^k}_{\\le \\frac{k+1}{k}} \\cdot \\underbrace{(k-1)! \\left(\\frac{e}{k}\\right)^{k-1}}_{\\le k \\text{ by induction}}$$\nwhere the first term is $\\le \\frac{k+1}{k}$ because\n$$e \\cdot \\left(\\frac{k}{k+1}\\right)^{k+1}\n= e \\cdot \\left(1-\\frac{1}{k+1}\\right)^{k+1} \\le e \\cdot e^{-1} = 1.$$\n[It is well known that $\\left(1-\\frac{1}{k+1}\\right)^{k+1}$ converges to $e^{-1}$ as $k \\to \\infty$, but one can show by induction that this actually increases monotonically to $e^{-1}$.]</p>\n" }, { "answer_id": 2098158, "score": 2, "text": "<p>Take the log of all sides, so we just need to prove that</p>\n\n<p>$$n\\ln n-n+1\\le\\ln(n!)\\le (n+1)\\ln n-n+1$$</p>\n\n<p>Then, it is easy to see that</p>\n\n<p>$$\\ln(n!)=\\sum_{k=1}^n\\ln(k)$$</p>\n\n<p>And that is the right Riemann sum to the following integral:</p>\n\n<p>$$\\sum_{k=1}^n\\ln(k)\\ge\\int_1^n\\ln(x)\\ dx=n\\ln n-n+1$$</p>\n\n<p>So we have proven the left side. 
Then notice that we have the following trapezoidal sum:</p>\n\n<p>$$\\frac{\\ln(n)+2\\ln((n-1)!)}2=\\frac{\\ln(n!)+\\ln((n-1)!)}2\\\\=\\sum_{k=1}^n\\frac{\\ln(k+1)+\\ln(k)}2\\\\\\le\\int_1^n\\ln(x)\\ dx=n\\ln(n)-n+1\\\\\\frac{\\ln(n)+2\\ln((n-1)!)}2\\le n\\ln(n)-n+1\\\\\\implies\\ln((n-1)!)\\le\\left(n-\\frac12\\right)\\ln(n)-n+1\\\\\\implies\\ln(n!)=\\ln(n)+\\ln((n-1)!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1$$</p>\n\n<p>Thus, when $n\\ge1$,</p>\n\n<p>$$\\ln(n!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1\\le(n+1)\\ln(n)-n+1$$</p>\n\n<p>and we have everything we wanted.</p>\n" }, { "answer_id": 2098175, "score": 6, "text": "<p>Rearranging gives the equivalent inequalities\n$$1 \\le (n-1)! \\left(\\frac{e}{n}\\right)^{n-1} \\le n.$$</p>\n\n<p>When $n=1$ both inequalities are equalities.\nAssuming the statement holds for $n=k$, then we want to prove\n$$1 \\le k! \\left(\\frac{e}{k+1}\\right)^k \\le k+1.$$</p>\n\n<hr>\n\n<p>The first inequality holds since\n$$k! \\left(\\frac{e}{k+1}\\right)^k = \\underbrace{e \\cdot \\left(\\frac{k}{k+1}\\right)^k}_{\\ge 1} \\cdot \\underbrace{(k-1)! \\left(\\frac{e}{k}\\right)^{k-1}}_{\\ge 1 \\text{ by induction}}$$\nwhere the first term is $\\ge 1$ because the hint gives\n$$e^{1/k} \\frac{k}{k+1} \\ge \\left(1+\\frac{1}{k}\\right) \\frac{k}{k+1} = 1.$$</p>\n\n<hr>\n\n<p>The second inequality is due to\n$$k! \\left(\\frac{e}{k+1}\\right)^k = \\underbrace{e \\cdot \\left(\\frac{k}{k+1}\\right)^k}_{\\le \\frac{k+1}{k}} \\cdot \\underbrace{(k-1)! 
\\left(\\frac{e}{k}\\right)^{k-1}}_{\\le k \\text{ by induction}}$$\nwhere the first term is $\\le \\frac{k+1}{k}$ because\n$$e \\cdot \\left(\\frac{k}{k+1}\\right)^{k+1}\n= e \\cdot \\left(1-\\frac{1}{k+1}\\right)^{k+1} \\le e \\cdot e^{-1} = 1.$$\n[It is well known that $\\left(1-\\frac{1}{k+1}\\right)^{k+1}$ converges to $e^{-1}$ as $k \\to \\infty$, but one can show by induction that this actually increases monotonically to $e^{-1}$.]</p>\n" }, { "answer_id": 2098158, "score": 2, "text": "<p>Take the log of all sides, so we just need to prove that</p>\n\n<p>$$n\\ln n-n+1\\le\\ln(n!)\\le (n+1)\\ln n-n+1$$</p>\n\n<p>Then, it is easy to see that</p>\n\n<p>$$\\ln(n!)=\\sum_{k=1}^n\\ln(k)$$</p>\n\n<p>And that is the right Riemann sum to the following integral:</p>\n\n<p>$$\\sum_{k=1}^n\\ln(k)\\ge\\int_1^n\\ln(x)\\ dx=n\\ln n-n+1$$</p>\n\n<p>So we have proven the left side. Then notice that we have the following trapezoidal sum:</p>\n\n<p>$$\\frac{\\ln(n)+2\\ln((n-1)!)}2=\\frac{\\ln(n!)+\\ln((n-1)!)}2\\\\=\\sum_{k=1}^n\\frac{\\ln(k+1)+\\ln(k)}2\\\\\\le\\int_1^n\\ln(x)\\ dx=n\\ln(n)-n+1\\\\\\frac{\\ln(n)+2\\ln((n-1)!)}2\\le n\\ln(n)-n+1\\\\\\implies\\ln((n-1)!)\\le\\left(n-\\frac12\\right)\\ln(n)-n+1\\\\\\implies\\ln(n!)=\\ln(n)+\\ln((n-1)!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1$$</p>\n\n<p>Thus, when $n\\ge1$,</p>\n\n<p>$$\\ln(n!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1\\le(n+1)\\ln(n)-n+1$$</p>\n\n<p>and we have everything we wanted.</p>\n" }, { "answer_id": 2098175, "score": 6, "text": "<p>Rearranging gives the equivalent inequalities\n$$1 \\le (n-1)! \\left(\\frac{e}{n}\\right)^{n-1} \\le n.$$</p>\n\n<p>When $n=1$ both inequalities are equalities.\nAssuming the statement holds for $n=k$, then we want to prove\n$$1 \\le k! \\left(\\frac{e}{k+1}\\right)^k \\le k+1.$$</p>\n\n<hr>\n\n<p>The first inequality holds since\n$$k! \\left(\\frac{e}{k+1}\\right)^k = \\underbrace{e \\cdot \\left(\\frac{k}{k+1}\\right)^k}_{\\ge 1} \\cdot \\underbrace{(k-1)! 
\\left(\\frac{e}{k}\\right)^{k-1}}_{\\ge 1 \\text{ by induction}}$$\nwhere the first term is $\\ge 1$ because the hint gives\n$$e^{1/k} \\frac{k}{k+1} \\ge \\left(1+\\frac{1}{k}\\right) \\frac{k}{k+1} = 1.$$</p>\n\n<hr>\n\n<p>The second inequality is due to\n$$k! \\left(\\frac{e}{k+1}\\right)^k = \\underbrace{e \\cdot \\left(\\frac{k}{k+1}\\right)^k}_{\\le \\frac{k+1}{k}} \\cdot \\underbrace{(k-1)! \\left(\\frac{e}{k}\\right)^{k-1}}_{\\le k \\text{ by induction}}$$\nwhere the first term is $\\le \\frac{k+1}{k}$ because\n$$e \\cdot \\left(\\frac{k}{k+1}\\right)^{k+1}\n= e \\cdot \\left(1-\\frac{1}{k+1}\\right)^{k+1} \\le e \\cdot e^{-1} = 1.$$\n[It is well known that $\\left(1-\\frac{1}{k+1}\\right)^{k+1}$ converges to $e^{-1}$ as $k \\to \\infty$, but one can show by induction that this actually increases monotonically to $e^{-1}$.]</p>\n" }, { "answer_id": 2098158, "score": 2, "text": "<p>Take the log of all sides, so we just need to prove that</p>\n\n<p>$$n\\ln n-n+1\\le\\ln(n!)\\le (n+1)\\ln n-n+1$$</p>\n\n<p>Then, it is easy to see that</p>\n\n<p>$$\\ln(n!)=\\sum_{k=1}^n\\ln(k)$$</p>\n\n<p>And that is the right Riemann sum to the following integral:</p>\n\n<p>$$\\sum_{k=1}^n\\ln(k)\\ge\\int_1^n\\ln(x)\\ dx=n\\ln n-n+1$$</p>\n\n<p>So we have proven the left side. Then notice that we have the following trapezoidal sum:</p>\n\n<p>$$\\frac{\\ln(n)+2\\ln((n-1)!)}2=\\frac{\\ln(n!)+\\ln((n-1)!)}2\\\\=\\sum_{k=1}^n\\frac{\\ln(k+1)+\\ln(k)}2\\\\\\le\\int_1^n\\ln(x)\\ dx=n\\ln(n)-n+1\\\\\\frac{\\ln(n)+2\\ln((n-1)!)}2\\le n\\ln(n)-n+1\\\\\\implies\\ln((n-1)!)\\le\\left(n-\\frac12\\right)\\ln(n)-n+1\\\\\\implies\\ln(n!)=\\ln(n)+\\ln((n-1)!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1$$</p>\n\n<p>Thus, when $n\\ge1$,</p>\n\n<p>$$\\ln(n!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1\\le(n+1)\\ln(n)-n+1$$</p>\n\n<p>and we have everything we wanted.</p>\n" }, { "answer_id": 2098175, "score": 6, "text": "<p>Rearranging gives the equivalent inequalities\n$$1 \\le (n-1)! 
\\left(\\frac{e}{n}\\right)^{n-1} \\le n.$$</p>\n\n<p>When $n=1$ both inequalities are equalities.\nAssuming the statement holds for $n=k$, then we want to prove\n$$1 \\le k! \\left(\\frac{e}{k+1}\\right)^k \\le k+1.$$</p>\n\n<hr>\n\n<p>The first inequality holds since\n$$k! \\left(\\frac{e}{k+1}\\right)^k = \\underbrace{e \\cdot \\left(\\frac{k}{k+1}\\right)^k}_{\\ge 1} \\cdot \\underbrace{(k-1)! \\left(\\frac{e}{k}\\right)^{k-1}}_{\\ge 1 \\text{ by induction}}$$\nwhere the first term is $\\ge 1$ because the hint gives\n$$e^{1/k} \\frac{k}{k+1} \\ge \\left(1+\\frac{1}{k}\\right) \\frac{k}{k+1} = 1.$$</p>\n\n<hr>\n\n<p>The second inequality is due to\n$$k! \\left(\\frac{e}{k+1}\\right)^k = \\underbrace{e \\cdot \\left(\\frac{k}{k+1}\\right)^k}_{\\le \\frac{k+1}{k}} \\cdot \\underbrace{(k-1)! \\left(\\frac{e}{k}\\right)^{k-1}}_{\\le k \\text{ by induction}}$$\nwhere the first term is $\\le \\frac{k+1}{k}$ because\n$$e \\cdot \\left(\\frac{k}{k+1}\\right)^{k+1}\n= e \\cdot \\left(1-\\frac{1}{k+1}\\right)^{k+1} \\le e \\cdot e^{-1} = 1.$$\n[It is well known that $\\left(1-\\frac{1}{k+1}\\right)^{k+1}$ converges to $e^{-1}$ as $k \\to \\infty$, but one can show by induction that this actually increases monotonically to $e^{-1}$.]</p>\n" }, { "answer_id": 2098158, "score": 2, "text": "<p>Take the log of all sides, so we just need to prove that</p>\n\n<p>$$n\\ln n-n+1\\le\\ln(n!)\\le (n+1)\\ln n-n+1$$</p>\n\n<p>Then, it is easy to see that</p>\n\n<p>$$\\ln(n!)=\\sum_{k=1}^n\\ln(k)$$</p>\n\n<p>And that is the right Riemann sum to the following integral:</p>\n\n<p>$$\\sum_{k=1}^n\\ln(k)\\ge\\int_1^n\\ln(x)\\ dx=n\\ln n-n+1$$</p>\n\n<p>So we have proven the left side. 
Then notice that we have the following trapezoidal sum:</p>\n\n<p>$$\\frac{\\ln(n)+2\\ln((n-1)!)}2=\\frac{\\ln(n!)+\\ln((n-1)!)}2\\\\=\\sum_{k=1}^n\\frac{\\ln(k+1)+\\ln(k)}2\\\\\\le\\int_1^n\\ln(x)\\ dx=n\\ln(n)-n+1\\\\\\frac{\\ln(n)+2\\ln((n-1)!)}2\\le n\\ln(n)-n+1\\\\\\implies\\ln((n-1)!)\\le\\left(n-\\frac12\\right)\\ln(n)-n+1\\\\\\implies\\ln(n!)=\\ln(n)+\\ln((n-1)!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1$$</p>\n\n<p>Thus, when $n\\ge1$,</p>\n\n<p>$$\\ln(n!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1\\le(n+1)\\ln(n)-n+1$$</p>\n\n<p>and we have everything we wanted.</p>\n" }, { "answer_id": 2098175, "score": 6, "text": "<p>Rearranging gives the equivalent inequalities\n$$1 \\le (n-1)! \\left(\\frac{e}{n}\\right)^{n-1} \\le n.$$</p>\n\n<p>When $n=1$ both inequalities are equalities.\nAssuming the statement holds for $n=k$, then we want to prove\n$$1 \\le k! \\left(\\frac{e}{k+1}\\right)^k \\le k+1.$$</p>\n\n<hr>\n\n<p>The first inequality holds since\n$$k! \\left(\\frac{e}{k+1}\\right)^k = \\underbrace{e \\cdot \\left(\\frac{k}{k+1}\\right)^k}_{\\ge 1} \\cdot \\underbrace{(k-1)! \\left(\\frac{e}{k}\\right)^{k-1}}_{\\ge 1 \\text{ by induction}}$$\nwhere the first term is $\\ge 1$ because the hint gives\n$$e^{1/k} \\frac{k}{k+1} \\ge \\left(1+\\frac{1}{k}\\right) \\frac{k}{k+1} = 1.$$</p>\n\n<hr>\n\n<p>The second inequality is due to\n$$k! \\left(\\frac{e}{k+1}\\right)^k = \\underbrace{e \\cdot \\left(\\frac{k}{k+1}\\right)^k}_{\\le \\frac{k+1}{k}} \\cdot \\underbrace{(k-1)! 
\\left(\\frac{e}{k}\\right)^{k-1}}_{\\le k \\text{ by induction}}$$\nwhere the first term is $\\le \\frac{k+1}{k}$ because\n$$e \\cdot \\left(\\frac{k}{k+1}\\right)^{k+1}\n= e \\cdot \\left(1-\\frac{1}{k+1}\\right)^{k+1} \\le e \\cdot e^{-1} = 1.$$\n[It is well known that $\\left(1-\\frac{1}{k+1}\\right)^{k+1}$ converges to $e^{-1}$ as $k \\to \\infty$, but one can show by induction that this actually increases monotonically to $e^{-1}$.]</p>\n" }, { "answer_id": 2098158, "score": 2, "text": "<p>Take the log of all sides, so we just need to prove that</p>\n\n<p>$$n\\ln n-n+1\\le\\ln(n!)\\le (n+1)\\ln n-n+1$$</p>\n\n<p>Then, it is easy to see that</p>\n\n<p>$$\\ln(n!)=\\sum_{k=1}^n\\ln(k)$$</p>\n\n<p>And that is the right Riemann sum to the following integral:</p>\n\n<p>$$\\sum_{k=1}^n\\ln(k)\\ge\\int_1^n\\ln(x)\\ dx=n\\ln n-n+1$$</p>\n\n<p>So we have proven the left side. Then notice that we have the following trapezoidal sum:</p>\n\n<p>$$\\frac{\\ln(n)+2\\ln((n-1)!)}2=\\frac{\\ln(n!)+\\ln((n-1)!)}2\\\\=\\sum_{k=1}^n\\frac{\\ln(k+1)+\\ln(k)}2\\\\\\le\\int_1^n\\ln(x)\\ dx=n\\ln(n)-n+1\\\\\\frac{\\ln(n)+2\\ln((n-1)!)}2\\le n\\ln(n)-n+1\\\\\\implies\\ln((n-1)!)\\le\\left(n-\\frac12\\right)\\ln(n)-n+1\\\\\\implies\\ln(n!)=\\ln(n)+\\ln((n-1)!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1$$</p>\n\n<p>Thus, when $n\\ge1$,</p>\n\n<p>$$\\ln(n!)\\le\\left(n+\\frac12\\right)\\ln(n)-n+1\\le(n+1)\\ln(n)-n+1$$</p>\n\n<p>and we have everything we wanted.</p>\n" } ]
Question: How to lower/upper bound $n!$ using $1+x\leq e^x$? Answer 1: <p>Rearranging gives the equivalent inequalities $$1 \le (n-1)! \left(\frac{e}{n}\right)^{n-1} \le n.$$</p> <p>When $n=1$ both inequalities are equalities. Assuming the statement holds for $n=k$, then we want to prove $$1 \le k! \left(\frac{e}{k+1}\right)^k \le k+1.$$</p> <hr> <p>The first inequality holds since $$k! \left(\frac{e}{k+1}\right)^k = \underbrace{e \cdot \left(\frac{k}{k+1}\right)^k}_{\ge 1} \cdot \underbrace{(k-1)! \left(\frac{e}{k}\right)^{k-1}}_{\ge 1 \text{ by induction}}$$ where the first term is $\ge 1$ because the hint gives $$e^{1/k} \frac{k}{k+1} \ge \left(1+\frac{1}{k}\right) \frac{k}{k+1} = 1.$$</p> <hr> <p>The second inequality is due to $$k! \left(\frac{e}{k+1}\right)^k = \underbrace{e \cdot \left(\frac{k}{k+1}\right)^k}_{\le \frac{k+1}{k}} \cdot \underbrace{(k-1)! \left(\frac{e}{k}\right)^{k-1}}_{\le k \text{ by induction}}$$ where the first term is $\le \frac{k+1}{k}$ because $$e \cdot \left(\frac{k}{k+1}\right)^{k+1} = e \cdot \left(1-\frac{1}{k+1}\right)^{k+1} \le e \cdot e^{-1} = 1.$$ [It is well known that $\left(1-\frac{1}{k+1}\right)^{k+1}$ converges to $e^{-1}$ as $k \to \infty$, but one can show by induction that this actually increases monotonically to $e^{-1}$.]</p> Answer 2: <p>Take the log of all sides, so we just need to prove that</p> <p>$$n\ln n-n+1\le\ln(n!)\le (n+1)\ln n-n+1$$</p> <p>Then, it is easy to see that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(k)$$</p> <p>And that is the right Riemann sum to the following integral:</p> <p>$$\sum_{k=1}^n\ln(k)\ge\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>So we have proven the left side. 
Then notice that we have the following trapezoidal sum:</p> <p>$$\frac{\ln(n)+2\ln((n-1)!)}2=\frac{\ln(n!)+\ln((n-1)!)}2\\=\sum_{k=1}^{n-1}\frac{\ln(k+1)+\ln(k)}2\\\le\int_1^n\ln(x)\ dx=n\ln(n)-n+1\\\frac{\ln(n)+2\ln((n-1)!)}2\le n\ln(n)-n+1\\\implies\ln((n-1)!)\le\left(n-\frac12\right)\ln(n)-n+1\\\implies\ln(n!)=\ln(n)+\ln((n-1)!)\le\left(n+\frac12\right)\ln(n)-n+1$$</p> <p>Thus, when $n\ge1$,</p> <p>$$\ln(n!)\le\left(n+\frac12\right)\ln(n)-n+1\le(n+1)\ln(n)-n+1$$</p> <p>and we have everything we wanted.</p>
\left(\frac{e}{k+1}\right)^k = \underbrace{e \cdot \left(\frac{k}{k+1}\right)^k}_{\ge 1} \cdot \underbrace{(k-1)! \left(\frac{e}{k}\right)^{k-1}}_{\ge 1 \text{ by induction}}$$ where the first term is $\ge 1$ because the hint gives $$e^{1/k} \frac{k}{k+1} \ge \left(1+\frac{1}{k}\right) \frac{k}{k+1} = 1.$$</p> <hr> <p>The second inequality is due to $$k! \left(\frac{e}{k+1}\right)^k = \underbrace{e \cdot \left(\frac{k}{k+1}\right)^k}_{\le \frac{k+1}{k}} \cdot \underbrace{(k-1)! \left(\frac{e}{k}\right)^{k-1}}_{\le k \text{ by induction}}$$ where the first term is $\le \frac{k+1}{k}$ because $$e \cdot \left(\frac{k}{k+1}\right)^{k+1} = e \cdot \left(1-\frac{1}{k+1}\right)^{k+1} \le e \cdot e^{-1} = 1.$$ [It is well known that $\left(1-\frac{1}{k+1}\right)^{k+1}$ converges to $e^{-1}$ as $k \to \infty$, but one can show by induction that this actually increases monotonically to $e^{-1}$.]</p> Answer 20: <p>Take the log of all sides, so we just need to prove that</p> <p>$$n\ln n-n+1\le\ln(n!)\le (n+1)\ln n-n+1$$</p> <p>Then, it is easy to see that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(k)$$</p> <p>And that is the right Riemann sum to the following integral:</p> <p>$$\sum_{k=1}^n\ln(k)\ge\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>So we have proven the left side. Then notice that we have the following trapezoidal sum:</p> <p>$$\frac{\ln(n)+2\ln((n-1)!)}2=\frac{\ln(n!)+\ln((n-1)!)}2\\=\sum_{k=1}^n\frac{\ln(k+1)+\ln(k)}2\\\le\int_1^n\ln(x)\ dx=n\ln(n)-n+1\\\frac{\ln(n)+2\ln((n-1)!)}2\le n\ln(n)-n+1\\\implies\ln((n-1)!)\le\left(n-\frac12\right)\ln(n)-n+1\\\implies\ln(n!)=\ln(n)+\ln((n-1)!)\le\left(n+\frac12\right)\ln(n)-n+1$$</p> <p>Thus, when $n\ge1$,</p> <p>$$\ln(n!)\le\left(n+\frac12\right)\ln(n)-n+1\le(n+1)\ln(n)-n+1$$</p> <p>and we have everything we wanted.</p> Answer 21: <p>Rearranging gives the equivalent inequalities $$1 \le (n-1)! \left(\frac{e}{n}\right)^{n-1} \le n.$$</p> <p>When $n=1$ both inequalities are equalities. 
Assuming the statement holds for $n=k$, then we want to prove $$1 \le k! \left(\frac{e}{k+1}\right)^k \le k+1.$$</p> <hr> <p>The first inequality holds since $$k! \left(\frac{e}{k+1}\right)^k = \underbrace{e \cdot \left(\frac{k}{k+1}\right)^k}_{\ge 1} \cdot \underbrace{(k-1)! \left(\frac{e}{k}\right)^{k-1}}_{\ge 1 \text{ by induction}}$$ where the first term is $\ge 1$ because the hint gives $$e^{1/k} \frac{k}{k+1} \ge \left(1+\frac{1}{k}\right) \frac{k}{k+1} = 1.$$</p> <hr> <p>The second inequality is due to $$k! \left(\frac{e}{k+1}\right)^k = \underbrace{e \cdot \left(\frac{k}{k+1}\right)^k}_{\le \frac{k+1}{k}} \cdot \underbrace{(k-1)! \left(\frac{e}{k}\right)^{k-1}}_{\le k \text{ by induction}}$$ where the first term is $\le \frac{k+1}{k}$ because $$e \cdot \left(\frac{k}{k+1}\right)^{k+1} = e \cdot \left(1-\frac{1}{k+1}\right)^{k+1} \le e \cdot e^{-1} = 1.$$ [It is well known that $\left(1-\frac{1}{k+1}\right)^{k+1}$ converges to $e^{-1}$ as $k \to \infty$, but one can show by induction that this actually increases monotonically to $e^{-1}$.]</p> Answer 22: <p>Take the log of all sides, so we just need to prove that</p> <p>$$n\ln n-n+1\le\ln(n!)\le (n+1)\ln n-n+1$$</p> <p>Then, it is easy to see that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(k)$$</p> <p>And that is the right Riemann sum to the following integral:</p> <p>$$\sum_{k=1}^n\ln(k)\ge\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>So we have proven the left side. Then notice that we have the following trapezoidal sum:</p> <p>$$\frac{\ln(n)+2\ln((n-1)!)}2=\frac{\ln(n!)+\ln((n-1)!)}2\\=\sum_{k=1}^n\frac{\ln(k+1)+\ln(k)}2\\\le\int_1^n\ln(x)\ dx=n\ln(n)-n+1\\\frac{\ln(n)+2\ln((n-1)!)}2\le n\ln(n)-n+1\\\implies\ln((n-1)!)\le\left(n-\frac12\right)\ln(n)-n+1\\\implies\ln(n!)=\ln(n)+\ln((n-1)!)\le\left(n+\frac12\right)\ln(n)-n+1$$</p> <p>Thus, when $n\ge1$,</p> <p>$$\ln(n!)\le\left(n+\frac12\right)\ln(n)-n+1\le(n+1)\ln(n)-n+1$$</p> <p>and we have everything we wanted.</p>
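The two-sided bound proved above can be spot-checked numerically. The sketch below (my own addition, not part of the original answers) verifies $n\ln n-n+1\le\ln(n!)\le(n+1)\ln n-n+1$ for small $n$, using `math.lgamma` to compute $\ln(n!)$ without overflow:

```python
import math

# Spot-check the bound  n*ln(n) - n + 1 <= ln(n!) <= (n+1)*ln(n) - n + 1,
# i.e. n^n e^{-n} e <= n! <= n^{n+1} e^{-n} e, for small n.
# math.lgamma(n + 1) computes ln(n!) in floating point without overflow.
for n in range(1, 50):
    log_fact = math.lgamma(n + 1)
    lower = n * math.log(n) - n + 1
    upper = (n + 1) * math.log(n) - n + 1
    assert lower <= log_fact <= upper, (n, lower, log_fact, upper)
print("bounds verified for n = 1..49")
```

Both inequalities are equalities at $n=1$ (all three quantities are $0$), which is why the assertions allow equality.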
why is the entropy of a dynamical system interesting?
2,098,596
6
https://math.stackexchange.com/questions/2098596/why-is-the-entropy-of-a-dynamical-system-interesting
<p>I took an introductory class on dynamical systems this semester. In class we've seen a great deal about the entropy of a dynamical system $T : X \to X$ on a metric space $(X, d)$. In the topological case the entropy is defined to be $$h(T) := \lim_{\epsilon \to 0 } \lim_{k \to \infty} \frac{1}{k} \log (\text{sep}(k, \epsilon, T)),$$ where $\text{sep}(k, \epsilon, T)$ is defined to be the maximal cardinality of a subset $B \subset X$ such that for all $x, y \in B$ there is an $i = 0, ..., k - 1$ such that $d(T^i(x), T^i(y)) &gt; \epsilon$. </p> <p>In class it was said that the entropy measures how "chaotic" or "random" a map is. However, we never really saw any theorem or application that made clear why people care about entropy. It seems a rather arbitrary definition to me. </p> <p>Can anyone explain to me why the topological entropy of a dynamical system is interesting? </p> <p>Thanks!</p>
[ { "answer_id": 2098752, "score": 7, "text": "<blockquote>\n <p><em>Topological entropy</em> is one of many invariants of topological conjugacy.</p>\n</blockquote>\n\n<p>As with all the others, it has two main applications:</p>\n\n<p>$1)$ distinguish dynamics that are not topologically conjugate (if $h(T)\\ne h(S)$, then $T$ and $S$ are not topologically conjugate);</p>\n\n<p>$2)$ detect complicated behavior (if $h(T)\\ne0$, then $T$ has \"complicated behavior\").</p>\n\n<p>The second aspect is extremely technical and is one of the difficult topics of finite-dimensional dynamical systems theory.\nLet me formulate one rigorous result (among others), which may let us appreciate the interest in the notion, but also its technical nature:</p>\n\n<blockquote>\n <p>If $T$ is a $C^1$ diffeomorphism of a compact manifold and $h(T)&gt;0$, then there exists an ergodic $T$-invariant probability measure with at least one positive Lyapunov exponent.</p>\n</blockquote>\n" }, { "answer_id": 2099639, "score": 3, "text": "<p>This is a bit more informal, but the following interpretation was helpful to me as a student:</p>\n\n<hr>\n\n<p>Think of $\\epsilon$ as the resolution with which you can view the dynamics: that is, if two points are closer than $\\epsilon$, then you cannot distinguish them. Then, \n\\begin{align*}\n\\operatorname{sep}(k, \\epsilon, T)\n\\end{align*}\ncan be thought of as the number of \"distinguishable trajectories of length $k$\". Positivity of the entropy $h(T)$ implies that for $\\epsilon &gt; 0$ sufficiently small, this quantity grows at a positive exponential rate. </p>\n\n<p>What is the implication? Well, suppose you're modelling the dynamics of $T$ on a computer and want to capture all trajectories of length $n$. If $h(T) &gt; 0$, then the amount of data required to do so grows exponentially in $n$, at a rate $\\sim e^{n h(T)}$. This is why one often refers to entropy as capturing <em>complexity</em> of the system.</p>\n" } ]
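To make the definition in the question concrete, here is a rough numerical sketch (my own illustration, not from either answer) that estimates the growth rate $\frac1k\log \operatorname{sep}(k, \epsilon, T)$ for the doubling map $T(x)=2x \bmod 1$ on the circle, whose topological entropy is $\log 2$:

```python
import math

# Rough numerical illustration: estimate (1/k) * log(sep(k, eps, T)) for the
# doubling map T(x) = 2x mod 1 on the circle [0, 1), with the circle metric
# d(x, y) = min(|x - y|, 1 - |x - y|). Its topological entropy is log 2 ~= 0.693.

def d(x, y):
    a = abs(x - y)
    return min(a, 1.0 - a)

def separated_count(k, eps, grid=1024):
    """Greedily build a (k, eps)-separated set from a finite grid of initial
    points; this gives a lower bound on sep(k, eps, T)."""
    orbits = []
    for i in range(grid):
        y, orb = i / grid, []
        for _ in range(k):              # record the orbit segment x, Tx, ..., T^(k-1)x
            orb.append(y)
            y = (2.0 * y) % 1.0
        orbits.append(orb)
    chosen = []
    for orb in orbits:
        # keep orb only if it is (k, eps)-separated from every chosen orbit
        if all(max(d(a, b) for a, b in zip(orb, c)) > eps for c in chosen):
            chosen.append(orb)
    return len(chosen)

eps = 0.1
for k in (2, 4, 6):
    n = separated_count(k, eps)
    print(f"k={k}: sep >= {n}, (1/k) log sep ~= {math.log(n) / k:.3f}")
# The estimates overshoot log 2 for small k, because the 1/eps prefactor in
# sep(k, eps, T) ~ 2^k / eps only washes out in the limit k -> infinity.
```

Replacing the doubling map with an isometry such as a rigid rotation makes these counts stop growing in $k$, giving entropy $0$ — which is exactly the sense in which positive entropy detects "chaotic" behavior.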
Question: why is the entropy of a dynamical system interesting? Answer 1: <blockquote> <p><em>Topological entropy</em> is one of many invariants of topological conjugacy.</p> </blockquote> <p>As all others it has two main applications:</p> <p>$1)$ distinguish dynamics that are not topological conjugate (if $h(T)\ne h(S)$, then $T$ and $S$ are not topologically conjugate);</p> <p>$2)$ detect complicated behavior (if $h(T)\ne0$, then $T$ has "complicated behavior").</p> <p>The second aspect is extremely technical and is one of the difficult topics of the finite-dimensional dynamical systems theory. Let me formulate one rigorous result (among others), which may let us appreciate the interest in the notion, but also its tecnhical nature:</p> <blockquote> <p>If $T$ is a $C^1$ diffeomorphism of a compact manifold and $h(T)&gt;0$, then there exists an ergodic $T$-invariant probability measure with at least one positive Lyapunov exponent.</p> </blockquote> Answer 2: <p>This is a bit more informal, but the following interpretation was helpful to me as a student:</p> <hr> <p>Think of $\epsilon$ as the resolution with which you can view the dynamics: that is, if two points are closer than $\epsilon$, then you cannot distinguish them. Then, \begin{align*} \operatorname{sep}(k, \epsilon, T) \end{align*} can be thought of as the number of "distinguishable trajectories of length $k$". Positivity of the entropy $h(T)$ implies that for $\epsilon &gt; 0$ sufficiently small, this quantity grows at a positive exponential rate. </p> <p>What is the implication? Well, suppose you're modelling the dynamics of $T$ on a computer and want to capture all trajectories of length $n$. If $h(T) &gt; 0$, then the amount of data required to do so grows exponentially in $n$, at a rate $\sim e^{n h(T)}$. 
This is why one often refers to entropy as capturing <em>complexity</em> of the system.</p>
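The growth-rate reading in Answer 2 can be illustrated numerically. The sketch below is not part of the original thread; the choice of map (the doubling map $T(x)=2x \bmod 1$), the resolution $\epsilon = 1/10$, and the sample size are my own. It counts distinguishable length-$k$ itineraries at that resolution and estimates their exponential growth rate, which should land near $h(T)=\log 2$.

```python
from math import log

def itinerary(i, N, n, bins=10):
    # Orbit of x = i/N under the doubling map T(x) = 2x mod 1, coded by the
    # sequence of width-1/bins intervals it visits. All-integer arithmetic,
    # so there is no floating-point drift in the orbit.
    code = []
    v = i
    for _ in range(n):
        code.append(bins * v // N)  # which bin does v/N fall in?
        v = (2 * v) % N             # one step of the doubling map
    return tuple(code)

def count_codes(n, N=100_000, bins=10):
    # Number of distinguishable length-n trajectories at resolution 1/bins.
    return len({itinerary(i, N, n, bins) for i in range(N)})

c8, c12 = count_codes(8), count_codes(12)
rate = (log(c12) - log(c8)) / (12 - 8)
print(rate)  # should be close to log(2) ≈ 0.693
```

With 10 bins, the number of distinguishable trajectories doubles with each extra step, so the estimated rate matches $\log 2$, the topological entropy of the doubling map.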
Proper clopen subset of disconnected set
2,099,061
6
https://math.stackexchange.com/questions/2099061/proper-clopen-subset-of-disconnected-set
<p>In Chapman Pugh's Real Analysis the definition of a disconnected set is that it has a proper clopen subset. I was trying to apply that to a simple example but got the following inconsistency:</p> <p>The disconnected set $U=[a,b]\cup[c,d]\subset\mathbb R$ where $a&lt;b&lt;c&lt;d$ should have a proper clopen subset. However, $\mathbb R$ has no clopen proper subsets and any proper subset of $U$ is also a proper subset of $\mathbb R$ which is a contradiction. Where am I wrong in my reasoning? </p> <p>Thanks in advance!</p>
[ { "answer_id": 2099064, "score": 10, "text": "<p>You're mistaking the idea of being \"open in $U$\" with \"being open in $\\Bbb{R}$\".</p>\n\n<p>Indeed, $[a,b]$ is open in $U$ with the subspace topology because if you choose any $e$ with $b&lt;e&lt;c$, then $[a,b]=U\\cap(a-1,e)$. It is also closed in $U$ because $[a,b]$ closed in $\\Bbb{R}$ and $[a,b]=U\\cap[a,b]$.</p>\n" }, { "answer_id": 2099063, "score": 8, "text": "<p>$[a, b]$ is clopen <em>in</em> $U$, by the definition of subspace topology. It is closed because it's equal to $[a, b]\\cap U$, and it's open because it's equal to $(a-1, \\frac{b+c}{2})\\cap U$.</p>\n" } ]
Question: Proper clopen subset of disconnected set Answer 1: <p>You're mistaking the idea of being "open in $U$" with "being open in $\Bbb{R}$".</p> <p>Indeed, $[a,b]$ is open in $U$ with the subspace topology because if you choose any $e$ with $b&lt;e&lt;c$, then $[a,b]=U\cap(a-1,e)$. It is also closed in $U$ because $[a,b]$ closed in $\Bbb{R}$ and $[a,b]=U\cap[a,b]$.</p> Answer 2: <p>$[a, b]$ is clopen <em>in</em> $U$, by the definition of subspace topology. It is closed because it's equal to $[a, b]\cap U$, and it's open because it's equal to $(a-1, \frac{b+c}{2})\cap U$.</p>
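The identity the answers rely on can be sanity-checked numerically for a concrete $U$. The sketch below is my own illustration (not from the thread); the endpoints $a,b,c,d$ are an arbitrary choice of $a<b<c<d$. It samples points densely and confirms that the trace of the closed set $[a,b]$ on $U$ coincides with the trace of the open set $(a-1,\frac{b+c}{2})$ on $U$, which is exactly why $[a,b]$ is clopen in the subspace topology of $U$.

```python
a, b, c, d = 0.0, 1.0, 2.0, 3.0   # arbitrary choice with a < b < c < d

def in_U(x):
    # U = [a, b] ∪ [c, d]
    return (a <= x <= b) or (c <= x <= d)

# On a dense sample, [a, b] ∩ U and (a-1, (b+c)/2) ∩ U pick out the
# same points of U, so [a, b] is both a closed trace and an open trace.
samples = [i / 1000 for i in range(-2000, 5000)]
agree = all(
    (a <= x <= b) == (a - 1 < x < (b + c) / 2)
    for x in samples
    if in_U(x)
)
print(agree)  # True
```

Note that the interval $(a-1,\frac{b+c}{2})$ is open in $\Bbb R$ but its intersection with $U$ equals $[a,b]$ — openness is always relative to the ambient space, which is the point of both answers.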
Prove that all roots of $z\tan z = k$ lie in $\Bbb R$, where $k$ is a positive, non-zero real number.
2,099,088
6
https://math.stackexchange.com/questions/2099088/prove-that-all-roots-of-z-tan-z-k-lie-in-bbb-r-where-k-is-a-positive
<p>The question is that given in the title;</p> <blockquote> <p>Prove that all roots of the equation $z\tan z = k$ lie in $\Bbb R$, where $k$ is a positive non-zero real number.</p> </blockquote> <p>I attempted a somewhat "brute force" approach by simply letting $z = a + bi$, with $a, b \in \Bbb R$. Under this assumption, if $a = 0$ (that is, $z$ is purely imaginary), we come to the conclusion that $k$ must be of the form</p> <p>$$-b\tanh b = k.$$</p> <p>If $b &gt; 0$ then $\tanh b \in (0,1]$ and so $-b\tanh b &lt; 0$. If $b &lt; 0$ then $\tanh b \in [-1, 0)$ since $\tanh$ is odd, and so again, $-b\tanh b &lt; 0$, and so for $k \in \Bbb R_{&gt;0}$, we cannot have purely imaginary $z$. </p> <p>Consider then the case that $z = a + bi$ where $a, b \neq 0$. Then after some manipulation, we end up with</p> <p>$$z\tan z = \frac{\tan a + i\tanh b}{1 - i\tan a \tanh b}.$$</p> <p>Since both $\tan$ and $\tanh$, with domain restricted to $\Bbb R$, also have codomain $\Bbb R$, this ratio must be complex, since $\tanh b = 0$ iff $b = 0$, which we excluded under assumption, and so $k$ is complex if $z$ is complex.</p> <p>The only remaining possibility is that $z \in \Bbb R$, the result has been proven.</p> <p>I think this is somewhat hand-wavy at best; can anyone give me a more satisfactory/elegant proof of this result? </p>
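A quick numerical sanity check of the identities used in the answers below (an editor's illustration in Python, not part of the original thread). It verifies three facts on sample points: the closed form for $\operatorname{Im}(z\tan z)$, the inequalities $\sinh t/t > 1 > |\sin s/s|$ that rule out solutions with $xy\neq 0$, and that $z\tan z = -y\tanh y \le 0$ on the imaginary axis, so $z\tan z = k > 0$ forces $z$ real.

```python
import cmath
import math

def imag_part_identity(x, y):
    """Compare Im(z*tan z) with the closed form
    (x*sinh 2y + y*sin 2x) / (cos 2x + cosh 2y) derived in the answers."""
    z = complex(x, y)
    lhs = (z * cmath.tan(z)).imag
    rhs = (x * math.sinh(2 * y) + y * math.sin(2 * x)) / (
        math.cos(2 * x) + math.cosh(2 * y))
    return lhs, rhs

# 1) the closed-form imaginary part matches the direct computation
for x, y in [(0.3, 0.7), (-1.2, 0.4), (2.0, -0.9)]:
    lhs, rhs = imag_part_identity(x, y)
    assert abs(lhs - rhs) < 1e-12

# 2) sinh(t)/t > 1 while |sin(s)/s| < 1, so the realness condition
#    x*sinh(2y) = -y*sin(2x) forces x*y = 0
for t in [0.1, 1.0, 5.0]:
    assert math.sinh(t) / t > 1
for s in [0.1, 1.0, 5.0]:
    assert abs(math.sin(s) / s) < 1

# 3) in the remaining case x = 0, z*tan(z) = -y*tanh(y) <= 0,
#    which can never equal a positive k
for y in [0.5, 1.0, 3.0, -2.0]:
    val = complex(0, y) * cmath.tan(complex(0, y))
    assert abs(val.imag) < 1e-12 and val.real <= 0
```

The sample points are arbitrary; the analytic arguments in the answers below cover all cases.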
[ { "answer_id": 2099130, "score": 1, "text": "<p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class=\"math-container\">$z,$</span> which changes the problem. This solution is not correct as written!</p>\n<hr />\n<blockquote class=\"spoiler\">\n<p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class=\"math-container\">$k\\in\\Bbb R$</span>, not just <span class=\"math-container\">$k\\in\\Bbb R^+$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Say <span class=\"math-container\">$z = a + bi$</span>. Then <span class=\"math-container\">$$z\\tan z = \\frac{\\tan a + i\\tanh b}{1 - i\\tan a\\tanh b} = \\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\left(\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)\\right).$$</span> <span class=\"math-container\">$\\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\in\\Bbb R^+$</span>, because <span class=\"math-container\">$\\tan$</span> and <span class=\"math-container\">$\\tanh$</span> are real valued when restricted to <span class=\"math-container\">$\\Bbb R$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Hence, it is enough to show that if <span class=\"math-container\">$\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)$</span> is real, then <span class=\"math-container\">$b = 0$</span>. For this quantity to be real, we need <span class=\"math-container\">$\\tanh b(1 + \\tan^2 a) = 0$</span>. This forces either <span class=\"math-container\">$\\tanh b = 0$</span> or <span class=\"math-container\">$1 + \\tan^2 a = 0$</span>. However, <span class=\"math-container\">$\\tan^2 a\\geq 0$</span>, so <span class=\"math-container\">$1 + \\tan^2 a\\geq 1$</span>. Thus, if <span class=\"math-container\">$z\\tan z$</span> is real, we must have <span class=\"math-container\">$\\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class=\"math-container\">$b = 0$</span>, which means that <span class=\"math-container\">$a + bi\\in\\Bbb R$</span>.</p>\n</blockquote>\n" }, { "answer_id": 4482217, "score": 1, "text": "<p>(See <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi\">Real and imaginary part of $\\tan(a+bi)$</a> )</p>\n<p><span class=\"math-container\">$$\\tan(x+iy)=\\frac{\\sin2x+i\\sinh2y}{\\cos2x+\\cosh2y}$$</span>\n<span class=\"math-container\">$$(x+iy)\\tan(x+iy)=\\frac{(x\\sin2x-y\\sinh2y)+i(x\\sinh2y+y\\sin2x)}{\\cos2x+\\cosh2y}$$</span></p>\n<p>This shows that <span class=\"math-container\">$(x+iy)\\tan(x+iy)$</span> is real exactly when</p>\n<p><span class=\"math-container\">$$x\\sinh2y=-y\\sin2x.$$</span></p>\n<p>We want to know whether there are solutions with <span class=\"math-container\">$y\\neq0$</span>. If <span class=\"math-container\">$x=0$</span>, then the real part of the function has the same sign as <span class=\"math-container\">$-y\\sinh2y$</span>, which has the same sign as <span class=\"math-container\">$-y^2$</span>, which is not positive. So assume <span class=\"math-container\">$xy\\neq0$</span>. Then we have</p>\n<p><span class=\"math-container\">$$\\frac{\\sinh2y}{2y}=-\\frac{\\sin2x}{2x}.$$</span></p>\n<p>But the left side is always greater than <span class=\"math-container\">$1$</span>, while the right side is always smaller than <span class=\"math-container\">$1$</span>. 
So there is no solution here.</p>\n<p>Thus <span class=\"math-container\">$z\\tan z$</span> is a positive real number only when <span class=\"math-container\">$z$</span> is real.</p>\n" }, { "answer_id": 4482336, "score": 0, "text": "<p>The solution uses the partial fraction <a href=\"https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion\" rel=\"nofollow noreferrer\">expansion</a>\n<span class=\"math-container\">$$\\pi z\\cdot \\tan \\pi z = \\sum_{k \\ \\textrm{odd integer}} \\frac{z^2}{(\\frac{k}{2})^2 - z^2}$$</span></p>\n<p>Consider the rational fraction <span class=\"math-container\">$\\phi_a(\\cdot)$</span> with<br />\n<span class=\"math-container\">$$\\phi_a(z) = \\frac{z}{a-z} = \\frac{a}{a-z}-1$$</span></p>\n<p>If <span class=\"math-container\">$a&gt;0$</span> then <span class=\"math-container\">$\\phi_a$</span> leaves the upper (lower) half of the complex plane invariant, that is <span class=\"math-container\">$\\phi_a(\\mathcal{H}_{\\pm}) \\subset \\mathcal{H}_{\\pm}$</span> (in fact here we have equality). Now the map <span class=\"math-container\">$z\\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class=\"math-container\">$\\mathcal{H}_+$</span>, and the second one to the lower half plane <span class=\"math-container\">$\\mathcal{H}_-$</span>. Therefore, for <span class=\"math-container\">$a&gt;0$</span> the function\n<span class=\"math-container\">$$z\\ \\mapsto \\frac{z^2}{a-z^2}$$</span>\nmaps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p>\n<p><span class=\"math-container\">$$f\\colon z\\mapsto \\sum_{a&gt;0} c_a \\cdot \\frac{z^2}{a- z^2}$$</span></p>\n<p>takes the first quadrant (Re, Im <span class=\"math-container\">$&gt;0$</span>) to the upper half plane (Im <span class=\"math-container\">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class=\"math-container\">$f^{-1}(\\mathbb{R}) = \\mathbb{R}$</span>.</p>\n<p><span class=\"math-container\">$\\bf{Added:}$</span> Another method</p>\n<p>We <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323\">have</a></p>\n<p><span class=\"math-container\">$$\\tan(x+i y) = \\frac{\\sin 2x + i \\sinh 2y}{\\cos 2x + \\cosh 2 y}$$</span></p>\n<p>Let us show that <span class=\"math-container\">$f(z) \\colon = z\\ \\tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class=\"math-container\">$f(\\bar z) = \\bar f(z)$</span> (<span class=\"math-container\">$f$</span> real) we see to which half plane each quadrant goes.</p>\n<p>For <span class=\"math-container\">$y\\ne 0$</span> we have <span class=\"math-container\">$\\cosh 2y &gt;1$</span> so the denominator <span class=\"math-container\">$\\cos 2x + \\cosh 2y&gt;0$</span>. Let us show that for <span class=\"math-container\">$x\\cdot y&gt;0$</span> the imaginary part of\n<span class=\"math-container\">$(x+i y) ( \\sin 2 x + i \\sinh 2y)$</span>\nis <span class=\"math-container\">$&gt;0$</span>, that is <span class=\"math-container\">$x \\sinh 2y + y \\sin 2x&gt;0$</span>. 
With <span class=\"math-container\">$2x = s$</span>, <span class=\"math-container\">$2y = t$</span>, <span class=\"math-container\">$\\ s \\cdot t&gt;0$</span> we have to show that\n<span class=\"math-container\">$s \\sinh t + t \\sin s &gt; 0$</span>, or\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} + \\frac{\\sin s}{s} &gt;0$$</span></p>\n<p>Now\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} = 1 + \\frac{t^2}{3!} + \\frac{t^4}{5!}+ \\cdots &gt; 1$$</span>\nfor all <span class=\"math-container\">$t \\in \\mathbb{R}$</span>, <span class=\"math-container\">$t\\ne 0$</span>, while</p>\n<p><span class=\"math-container\">$$1 + \\frac{\\sin s}{s} = \\frac{1}{s} \\cdot \\int_0 ^s (1 + \\cos u) du &gt;0$$</span>\nfor all <span class=\"math-container\">$s\\in \\mathbb{R}$</span>, <span class=\"math-container\">$s\\ne 0$</span>. We are done.</p>\n" } ]
So there is no solution here.</p>\n<p>Thus <span class=\"math-container\">$z\\tan z$</span> is a positive real number only when <span class=\"math-container\">$z$</span> is real.</p>\n" }, { "answer_id": 4482336, "score": 0, "text": "<p>The solution uses the partial fraction <a href=\"https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion\" rel=\"nofollow noreferrer\">expansion</a>\n<span class=\"math-container\">$$\\pi z\\cdot \\tan \\pi z = \\sum_{k \\ \\textrm{odd integer}} \\frac{z^2}{(\\frac{k}{2})^2 - z^2}$$</span></p>\n<p>Consider the rational fraction <span class=\"math-container\">$\\phi_a(\\cdot)$</span> with<br />\n<span class=\"math-container\">$$\\phi_a(z) = \\frac{z}{a-z} = \\frac{a}{a-z}-1$$</span></p>\n<p>If <span class=\"math-container\">$a&gt;0$</span> then <span class=\"math-container\">$\\phi_a$</span> preserves the upper (lower) half complex plane, that is <span class=\"math-container\">$\\phi_a(\\mathcal{H}_{\\pm}) \\subset \\mathcal{H}_{\\pm}$</span> (in fact here we have equality). Now the map <span class=\"math-container\">$z\\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class=\"math-container\">$\\mathcal{H}_+$</span>, and the second one to the lower half plane <span class=\"math-container\">$\\mathcal{H}_-$</span>. Therefore, for <span class=\"math-container\">$a&gt;0$</span> the function\n<span class=\"math-container\">$$z\\ \\mapsto \\frac{z^2}{a-z^2}$$</span>\nmaps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p>\n<p><span class=\"math-container\">$$f\\colon z\\mapsto \\sum_{a&gt;0} c_a \\cdot \\frac{z^2}{a- z^2}$$</span></p>\n<p>takes the first quadrant (Re, Im <span class=\"math-container\">$&gt;0$</span>) to the upper half plane (Im <span class=\"math-container\">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class=\"math-container\">$f^{-1}(\\mathbb{R}) = \\mathbb{R}$</span>.</p>\n<p><span class=\"math-container\">$\\bf{Added:}$</span> Another method</p>\n<p>We <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323\">have</a></p>\n<p><span class=\"math-container\">$$\\tan(x+i y) = \\frac{\\sin 2x + i \\sinh 2y}{\\cos 2x + \\cosh 2 y}$$</span></p>\n<p>Let us show that <span class=\"math-container\">$f(z) \\colon = z\\ \\tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class=\"math-container\">$f(\\bar z) = \\bar f(z)$</span> (<span class=\"math-container\">$f$</span> real) we see to what half plane will each quadrant go.</p>\n<p>For <span class=\"math-container\">$y\\ne 0$</span> we have <span class=\"math-container\">$\\cosh 2y &gt;1$</span> so the denominator <span class=\"math-container\">$\\cos 2x + \\cosh y&gt;0$</span>. Let us that for <span class=\"math-container\">$x\\cdot y&gt;0$</span> the imaginary part of\n<span class=\"math-container\">$(x+i y) ( \\sin 2 x + i \\sinh 2y)$</span>\nis <span class=\"math-container\">$&gt;0$</span>, that is <span class=\"math-container\">$x \\sinh 2y + y \\sin 2x&gt;0$</span>. 
With <span class=\"math-container\">$2x = s$</span>, <span class=\"math-container\">$2y = t$</span>, <span class=\"math-container\">$\\ s \\cdot t&gt;0$</span> we have to show that\n<span class=\"math-container\">$s \\sinh t + t \\sin s &gt; 0$</span>, or\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} + \\frac{\\sin s}{s} &gt;0$$</span></p>\n<p>Now\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} = 1 + \\frac{t^2}{3!} + \\frac{t^4}{5!}+ \\cdots &gt; 1$$</span>\nfor all <span class=\"math-container\">$t \\in \\mathbb{R}$</span>, <span class=\"math-container\">$t\\ne 0$</span>, while</p>\n<p><span class=\"math-container\">$$1 + \\frac{\\sin s}{s} = \\frac{1}{s} \\cdot \\int_0 ^s (1 + \\cos u) du &gt;0$$</span>\nfor all <span class=\"math-container\">$s\\in \\mathbb{R}$</span>, <span class=\"math-container\">$s\\ne 0$</span>. We are done.</p>\n" }, { "answer_id": 2099130, "score": 1, "text": "<p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class=\"math-container\">$z,$</span> which changes the problem. This solution is not correct as written!</p>\n<hr />\n<blockquote class=\"spoiler\">\n<p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class=\"math-container\">$k\\in\\Bbb R$</span>, not just <span class=\"math-container\">$k\\in\\Bbb R^+$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Say <span class=\"math-container\">$z = a + bi$</span>. 
Then <span class=\"math-container\">$$z\\tan z = \\frac{\\tan a + i\\tanh b}{1 - i\\tan a\\tanh b} = \\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\left(\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)\\right).$$</span> <span class=\"math-container\">$\\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\in\\Bbb R^+$</span>, because <span class=\"math-container\">$\\tan$</span> and <span class=\"math-container\">$\\tanh$</span> are real valued when restricted to <span class=\"math-container\">$\\Bbb R$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Hence, it is enough to show that if <span class=\"math-container\">$\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)$</span> is real, then <span class=\"math-container\">$b = 0$</span>. For this quantity to be real, we need <span class=\"math-container\">$\\tanh b(1 + \\tan^2 a) = 0$</span>. This forces either <span class=\"math-container\">$\\tanh b = 0$</span> or <span class=\"math-container\">$1 + \\tan^2 a = 0$</span>. However, <span class=\"math-container\">$\\tan^2 a\\geq 0$</span>, so <span class=\"math-container\">$1 + \\tan^2 a\\geq 1$</span>. Thus, if <span class=\"math-container\">$z\\tan z$</span> is real, we must have <span class=\"math-container\">$\\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class=\"math-container\">$b = 0$</span>, which means that <span class=\"math-container\">$a + bi\\in\\Bbb R$</span>.</p>\n</blockquote>\n" }, { "answer_id": 4482217, "score": 1, "text": "<p>(See <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi\">Real and imaginary part of $\\tan(a+bi)$</a> )</p>\n<p><span class=\"math-container\">$$\\tan(x+iy)=\\frac{\\sin2x+i\\sinh2y}{\\cos2x+\\cosh2y}$$</span>\n<span class=\"math-container\">$$(x+iy)\\tan(x+iy)=\\frac{(x\\sin2x-y\\sinh2y)+i(x\\sinh2y+y\\sin2x)}{\\cos2x+\\cosh2y}$$</span></p>\n<p>This shows that <span class=\"math-container\">$(x+iy)\\tan(x+iy)$</span> is real exactly when</p>\n<p><span class=\"math-container\">$$x\\sinh2y=-y\\sin2x.$$</span></p>\n<p>We want to know whether there are solutions with <span class=\"math-container\">$y\\neq0$</span>. If <span class=\"math-container\">$x=0$</span>, then the real part of the function has the same sign as <span class=\"math-container\">$-y\\sinh2y$</span>, which has the same sign as <span class=\"math-container\">$-y^2$</span>, which is not positive. So assume <span class=\"math-container\">$xy\\neq0$</span>. Then we have</p>\n<p><span class=\"math-container\">$$\\frac{\\sinh2y}{2y}=-\\frac{\\sin2x}{2x}.$$</span></p>\n<p>But the left side is always greater than <span class=\"math-container\">$1$</span>, while the right side is always smaller than <span class=\"math-container\">$1$</span>. 
So there is no solution here.</p>\n<p>Thus <span class=\"math-container\">$z\\tan z$</span> is a positive real number only when <span class=\"math-container\">$z$</span> is real.</p>\n" }, { "answer_id": 4482336, "score": 0, "text": "<p>The solution uses the partial fraction <a href=\"https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion\" rel=\"nofollow noreferrer\">expansion</a>\n<span class=\"math-container\">$$\\pi z\\cdot \\tan \\pi z = \\sum_{k \\ \\textrm{odd integer}} \\frac{z^2}{(\\frac{k}{2})^2 - z^2}$$</span></p>\n<p>Consider the rational fraction <span class=\"math-container\">$\\phi_a(\\cdot)$</span> with<br />\n<span class=\"math-container\">$$\\phi_a(z) = \\frac{z}{a-z} = \\frac{a}{a-z}-1$$</span></p>\n<p>If <span class=\"math-container\">$a&gt;0$</span> then <span class=\"math-container\">$\\phi_a$</span> invariates the upper (lower) half complex plane, that is <span class=\"math-container\">$\\phi_a(\\mathcal{H}_{\\pm}) \\subset \\mathcal{H}_{\\pm}$</span> ( in fact here we have equality). Now the map <span class=\"math-container\">$z\\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class=\"math-container\">$\\mathcal{H}_+$</span>, and the second one to the lower half plane <span class=\"math-container\">$\\mathcal{H}_-$</span>. Therefore, for <span class=\"math-container\">$a&gt;0$</span> the function\n<span class=\"math-container\">$$z\\ \\mapsto \\frac{z^2}{a-z^2}$$</span>\nmaps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p>\n<p><span class=\"math-container\">$$f\\colon z\\mapsto \\sum_{a&gt;0} c_a \\cdot \\frac{z^2}{a- z^2}$$</span></p>\n<p>takes the first quadrant (Re, Im <span class=\"math-container\">$&gt;0$</span>) to the upper half plane ( Im <span class=\"math-container\">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class=\"math-container\">$f^{-1}(\\mathbb{R}) = \\mathbb{R}$</span>.</p>\n<p><span class=\"math-container\">$\\bf{Added:}$</span> Another method</p>\n<p>We <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323\">have</a></p>\n<p><span class=\"math-container\">$$\\tan(x+i y) = \\frac{\\sin 2x + i \\sinh 2y}{\\cos 2x + \\cosh 2 y}$$</span></p>\n<p>Let us show that <span class=\"math-container\">$f(z) \\colon = z\\ \\tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class=\"math-container\">$f(\\bar z) = \\bar f(z)$</span> (<span class=\"math-container\">$f$</span> real) we see to what half plane will each quadrant go.</p>\n<p>For <span class=\"math-container\">$y\\ne 0$</span> we have <span class=\"math-container\">$\\cosh 2y &gt;1$</span> so the denominator <span class=\"math-container\">$\\cos 2x + \\cosh y&gt;0$</span>. Let us that for <span class=\"math-container\">$x\\cdot y&gt;0$</span> the imaginary part of\n<span class=\"math-container\">$(x+i y) ( \\sin 2 x + i \\sinh 2y)$</span>\nis <span class=\"math-container\">$&gt;0$</span>, that is <span class=\"math-container\">$x \\sinh 2y + y \\sin 2x&gt;0$</span>. 
With <span class=\"math-container\">$2x = s$</span>, <span class=\"math-container\">$2y = t$</span>, <span class=\"math-container\">$\\ s \\cdot t&gt;0$</span> we have to show that\n<span class=\"math-container\">$s \\sinh t + t \\sin s &gt; 0$</span>, or\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} + \\frac{\\sin s}{s} &gt;0$$</span></p>\n<p>Now\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} = 1 + \\frac{t^2}{3!} + \\frac{t^4}{5!}+ \\cdots &gt; 1$$</span>\nfor all <span class=\"math-container\">$t \\in \\mathbb{R}$</span>, <span class=\"math-container\">$t\\ne 0$</span>, while</p>\n<p><span class=\"math-container\">$$1 + \\frac{\\sin s}{s} = \\frac{1}{s} \\cdot \\int_0 ^s (1 + \\cos u) du &gt;0$$</span>\nfor all <span class=\"math-container\">$s\\in \\mathbb{R}$</span>, <span class=\"math-container\">$s\\ne 0$</span>. We are done.</p>\n" }, { "answer_id": 2099130, "score": 1, "text": "<p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class=\"math-container\">$z,$</span> which changes the problem. This solution is not correct as written!</p>\n<hr />\n<blockquote class=\"spoiler\">\n<p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class=\"math-container\">$k\\in\\Bbb R$</span>, not just <span class=\"math-container\">$k\\in\\Bbb R^+$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Say <span class=\"math-container\">$z = a + bi$</span>. 
Then <span class=\"math-container\">$$z\\tan z = \\frac{\\tan a + i\\tanh b}{1 - i\\tan a\\tanh b} = \\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\left(\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)\\right).$$</span> <span class=\"math-container\">$\\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\in\\Bbb R^+$</span>, because <span class=\"math-container\">$\\tan$</span> and <span class=\"math-container\">$\\tanh$</span> are real valued when restricted to <span class=\"math-container\">$\\Bbb R$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Hence, it is enough to show that if <span class=\"math-container\">$\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)$</span> is real, then <span class=\"math-container\">$b = 0$</span>. For this quantity to be real, we need <span class=\"math-container\">$\\tanh b(1 + \\tan^2 a) = 0$</span>. This forces either <span class=\"math-container\">$\\tanh b = 0$</span> or <span class=\"math-container\">$1 + \\tan^2 a = 0$</span>. However, <span class=\"math-container\">$\\tan^2 a\\geq 0$</span>, so <span class=\"math-container\">$1 + \\tan^2 a\\geq 1$</span>. Thus, if <span class=\"math-container\">$z\\tan z$</span> is real, we must have <span class=\"math-container\">$\\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class=\"math-container\">$b = 0$</span>, which means that <span class=\"math-container\">$a + bi\\in\\Bbb R$</span>.</p>\n</blockquote>\n" }, { "answer_id": 4482217, "score": 1, "text": "<p>(See <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi\">Real and imaginary part of $\\tan(a+bi)$</a> )</p>\n<p><span class=\"math-container\">$$\\tan(x+iy)=\\frac{\\sin2x+i\\sinh2y}{\\cos2x+\\cosh2y}$$</span>\n<span class=\"math-container\">$$(x+iy)\\tan(x+iy)=\\frac{(x\\sin2x-y\\sinh2y)+i(x\\sinh2y+y\\sin2x)}{\\cos2x+\\cosh2y}$$</span></p>\n<p>This shows that <span class=\"math-container\">$(x+iy)\\tan(x+iy)$</span> is real exactly when</p>\n<p><span class=\"math-container\">$$x\\sinh2y=-y\\sin2x.$$</span></p>\n<p>We want to know whether there are solutions with <span class=\"math-container\">$y\\neq0$</span>. If <span class=\"math-container\">$x=0$</span>, then the real part of the function has the same sign as <span class=\"math-container\">$-y\\sinh2y$</span>, which has the same sign as <span class=\"math-container\">$-y^2$</span>, which is not positive. So assume <span class=\"math-container\">$xy\\neq0$</span>. Then we have</p>\n<p><span class=\"math-container\">$$\\frac{\\sinh2y}{2y}=-\\frac{\\sin2x}{2x}.$$</span></p>\n<p>But the left side is always greater than <span class=\"math-container\">$1$</span>, while the right side is always smaller than <span class=\"math-container\">$1$</span>. 
So there is no solution here.</p>\n<p>Thus <span class=\"math-container\">$z\\tan z$</span> is a positive real number only when <span class=\"math-container\">$z$</span> is real.</p>\n" }, { "answer_id": 4482336, "score": 0, "text": "<p>The solution uses the partial fraction <a href=\"https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion\" rel=\"nofollow noreferrer\">expansion</a>\n<span class=\"math-container\">$$\\pi z\\cdot \\tan \\pi z = \\sum_{k \\ \\textrm{odd integer}} \\frac{z^2}{(\\frac{k}{2})^2 - z^2}$$</span></p>\n<p>Consider the rational fraction <span class=\"math-container\">$\\phi_a(\\cdot)$</span> with<br />\n<span class=\"math-container\">$$\\phi_a(z) = \\frac{z}{a-z} = \\frac{a}{a-z}-1$$</span></p>\n<p>If <span class=\"math-container\">$a&gt;0$</span> then <span class=\"math-container\">$\\phi_a$</span> invariates the upper (lower) half complex plane, that is <span class=\"math-container\">$\\phi_a(\\mathcal{H}_{\\pm}) \\subset \\mathcal{H}_{\\pm}$</span> ( in fact here we have equality). Now the map <span class=\"math-container\">$z\\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class=\"math-container\">$\\mathcal{H}_+$</span>, and the second one to the lower half plane <span class=\"math-container\">$\\mathcal{H}_-$</span>. Therefore, for <span class=\"math-container\">$a&gt;0$</span> the function\n<span class=\"math-container\">$$z\\ \\mapsto \\frac{z^2}{a-z^2}$$</span>\nmaps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p>\n<p><span class=\"math-container\">$$f\\colon z\\mapsto \\sum_{a&gt;0} c_a \\cdot \\frac{z^2}{a- z^2}$$</span></p>\n<p>takes the first quadrant (Re, Im <span class=\"math-container\">$&gt;0$</span>) to the upper half plane ( Im <span class=\"math-container\">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class=\"math-container\">$f^{-1}(\\mathbb{R}) = \\mathbb{R}$</span>.</p>\n<p><span class=\"math-container\">$\\bf{Added:}$</span> Another method</p>\n<p>We <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323\">have</a></p>\n<p><span class=\"math-container\">$$\\tan(x+i y) = \\frac{\\sin 2x + i \\sinh 2y}{\\cos 2x + \\cosh 2 y}$$</span></p>\n<p>Let us show that <span class=\"math-container\">$f(z) \\colon = z\\ \\tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class=\"math-container\">$f(\\bar z) = \\bar f(z)$</span> (<span class=\"math-container\">$f$</span> real) we see to what half plane will each quadrant go.</p>\n<p>For <span class=\"math-container\">$y\\ne 0$</span> we have <span class=\"math-container\">$\\cosh 2y &gt;1$</span> so the denominator <span class=\"math-container\">$\\cos 2x + \\cosh y&gt;0$</span>. Let us that for <span class=\"math-container\">$x\\cdot y&gt;0$</span> the imaginary part of\n<span class=\"math-container\">$(x+i y) ( \\sin 2 x + i \\sinh 2y)$</span>\nis <span class=\"math-container\">$&gt;0$</span>, that is <span class=\"math-container\">$x \\sinh 2y + y \\sin 2x&gt;0$</span>. 
With <span class=\"math-container\">$2x = s$</span>, <span class=\"math-container\">$2y = t$</span>, <span class=\"math-container\">$\\ s \\cdot t&gt;0$</span> we have to show that\n<span class=\"math-container\">$s \\sinh t + t \\sin s &gt; 0$</span>, or\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} + \\frac{\\sin s}{s} &gt;0$$</span></p>\n<p>Now\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} = 1 + \\frac{t^2}{3!} + \\frac{t^4}{5!}+ \\cdots &gt; 1$$</span>\nfor all <span class=\"math-container\">$t \\in \\mathbb{R}$</span>, <span class=\"math-container\">$t\\ne 0$</span>, while</p>\n<p><span class=\"math-container\">$$1 + \\frac{\\sin s}{s} = \\frac{1}{s} \\cdot \\int_0 ^s (1 + \\cos u) du &gt;0$$</span>\nfor all <span class=\"math-container\">$s\\in \\mathbb{R}$</span>, <span class=\"math-container\">$s\\ne 0$</span>. We are done.</p>\n" }, { "answer_id": 2099130, "score": 1, "text": "<p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class=\"math-container\">$z,$</span> which changes the problem. This solution is not correct as written!</p>\n<hr />\n<blockquote class=\"spoiler\">\n<p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class=\"math-container\">$k\\in\\Bbb R$</span>, not just <span class=\"math-container\">$k\\in\\Bbb R^+$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Say <span class=\"math-container\">$z = a + bi$</span>. 
Then <span class=\"math-container\">$$z\\tan z = \\frac{\\tan a + i\\tanh b}{1 - i\\tan a\\tanh b} = \\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\left(\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)\\right).$$</span> <span class=\"math-container\">$\\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\in\\Bbb R^+$</span>, because <span class=\"math-container\">$\\tan$</span> and <span class=\"math-container\">$\\tanh$</span> are real valued when restricted to <span class=\"math-container\">$\\Bbb R$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Hence, it is enough to show that if <span class=\"math-container\">$\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)$</span> is real, then <span class=\"math-container\">$b = 0$</span>. For this quantity to be real, we need <span class=\"math-container\">$\\tanh b(1 + \\tan^2 a) = 0$</span>. This forces either <span class=\"math-container\">$\\tanh b = 0$</span> or <span class=\"math-container\">$1 + \\tan^2 a = 0$</span>. However, <span class=\"math-container\">$\\tan^2 a\\geq 0$</span>, so <span class=\"math-container\">$1 + \\tan^2 a\\geq 1$</span>. Thus, if <span class=\"math-container\">$z\\tan z$</span> is real, we must have <span class=\"math-container\">$\\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class=\"math-container\">$b = 0$</span>, which means that <span class=\"math-container\">$a + bi\\in\\Bbb R$</span>.</p>\n</blockquote>\n" }, { "answer_id": 4482217, "score": 1, "text": "<p>(See <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi\">Real and imaginary part of $\\tan(a+bi)$</a> )</p>\n<p><span class=\"math-container\">$$\\tan(x+iy)=\\frac{\\sin2x+i\\sinh2y}{\\cos2x+\\cosh2y}$$</span>\n<span class=\"math-container\">$$(x+iy)\\tan(x+iy)=\\frac{(x\\sin2x-y\\sinh2y)+i(x\\sinh2y+y\\sin2x)}{\\cos2x+\\cosh2y}$$</span></p>\n<p>This shows that <span class=\"math-container\">$(x+iy)\\tan(x+iy)$</span> is real exactly when</p>\n<p><span class=\"math-container\">$$x\\sinh2y=-y\\sin2x.$$</span></p>\n<p>We want to know whether there are solutions with <span class=\"math-container\">$y\\neq0$</span>. If <span class=\"math-container\">$x=0$</span>, then the real part of the function has the same sign as <span class=\"math-container\">$-y\\sinh2y$</span>, which has the same sign as <span class=\"math-container\">$-y^2$</span>, which is not positive. So assume <span class=\"math-container\">$xy\\neq0$</span>. Then we have</p>\n<p><span class=\"math-container\">$$\\frac{\\sinh2y}{2y}=-\\frac{\\sin2x}{2x}.$$</span></p>\n<p>But the left side is always greater than <span class=\"math-container\">$1$</span>, while the right side is always smaller than <span class=\"math-container\">$1$</span>. 
So there is no solution here.</p>\n<p>Thus <span class=\"math-container\">$z\\tan z$</span> is a positive real number only when <span class=\"math-container\">$z$</span> is real.</p>\n" }, { "answer_id": 4482336, "score": 0, "text": "<p>The solution uses the partial fraction <a href=\"https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion\" rel=\"nofollow noreferrer\">expansion</a>\n<span class=\"math-container\">$$\\pi z\\cdot \\tan \\pi z = \\sum_{k \\ \\textrm{odd integer}} \\frac{z^2}{(\\frac{k}{2})^2 - z^2}$$</span></p>\n<p>Consider the rational fraction <span class=\"math-container\">$\\phi_a(\\cdot)$</span> with<br />\n<span class=\"math-container\">$$\\phi_a(z) = \\frac{z}{a-z} = \\frac{a}{a-z}-1$$</span></p>\n<p>If <span class=\"math-container\">$a&gt;0$</span> then <span class=\"math-container\">$\\phi_a$</span> invariates the upper (lower) half complex plane, that is <span class=\"math-container\">$\\phi_a(\\mathcal{H}_{\\pm}) \\subset \\mathcal{H}_{\\pm}$</span> ( in fact here we have equality). Now the map <span class=\"math-container\">$z\\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class=\"math-container\">$\\mathcal{H}_+$</span>, and the second one to the lower half plane <span class=\"math-container\">$\\mathcal{H}_-$</span>. Therefore, for <span class=\"math-container\">$a&gt;0$</span> the function\n<span class=\"math-container\">$$z\\ \\mapsto \\frac{z^2}{a-z^2}$$</span>\nmaps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p>\n<p><span class=\"math-container\">$$f\\colon z\\mapsto \\sum_{a&gt;0} c_a \\cdot \\frac{z^2}{a- z^2}$$</span></p>\n<p>takes the first quadrant (Re, Im <span class=\"math-container\">$&gt;0$</span>) to the upper half plane ( Im <span class=\"math-container\">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class=\"math-container\">$f^{-1}(\\mathbb{R}) = \\mathbb{R}$</span>.</p>\n<p><span class=\"math-container\">$\\bf{Added:}$</span> Another method</p>\n<p>We <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323\">have</a></p>\n<p><span class=\"math-container\">$$\\tan(x+i y) = \\frac{\\sin 2x + i \\sinh 2y}{\\cos 2x + \\cosh 2 y}$$</span></p>\n<p>Let us show that <span class=\"math-container\">$f(z) \\colon = z\\ \\tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class=\"math-container\">$f(\\bar z) = \\bar f(z)$</span> (<span class=\"math-container\">$f$</span> real) we see to what half plane will each quadrant go.</p>\n<p>For <span class=\"math-container\">$y\\ne 0$</span> we have <span class=\"math-container\">$\\cosh 2y &gt;1$</span> so the denominator <span class=\"math-container\">$\\cos 2x + \\cosh y&gt;0$</span>. Let us that for <span class=\"math-container\">$x\\cdot y&gt;0$</span> the imaginary part of\n<span class=\"math-container\">$(x+i y) ( \\sin 2 x + i \\sinh 2y)$</span>\nis <span class=\"math-container\">$&gt;0$</span>, that is <span class=\"math-container\">$x \\sinh 2y + y \\sin 2x&gt;0$</span>. 
With <span class=\"math-container\">$2x = s$</span>, <span class=\"math-container\">$2y = t$</span>, <span class=\"math-container\">$\\ s \\cdot t&gt;0$</span> we have to show that\n<span class=\"math-container\">$s \\sinh t + t \\sin s &gt; 0$</span>, or\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} + \\frac{\\sin s}{s} &gt;0$$</span></p>\n<p>Now\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} = 1 + \\frac{t^2}{3!} + \\frac{t^4}{5!}+ \\cdots &gt; 1$$</span>\nfor all <span class=\"math-container\">$t \\in \\mathbb{R}$</span>, <span class=\"math-container\">$t\\ne 0$</span>, while</p>\n<p><span class=\"math-container\">$$1 + \\frac{\\sin s}{s} = \\frac{1}{s} \\cdot \\int_0 ^s (1 + \\cos u) du &gt;0$$</span>\nfor all <span class=\"math-container\">$s\\in \\mathbb{R}$</span>, <span class=\"math-container\">$s\\ne 0$</span>. We are done.</p>\n" }, { "answer_id": 2099130, "score": 1, "text": "<p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class=\"math-container\">$z,$</span> which changes the problem. This solution is not correct as written!</p>\n<hr />\n<blockquote class=\"spoiler\">\n<p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class=\"math-container\">$k\\in\\Bbb R$</span>, not just <span class=\"math-container\">$k\\in\\Bbb R^+$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Say <span class=\"math-container\">$z = a + bi$</span>. 
Then <span class=\"math-container\">$$z\\tan z = \\frac{\\tan a + i\\tanh b}{1 - i\\tan a\\tanh b} = \\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\left(\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)\\right).$$</span> <span class=\"math-container\">$\\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\in\\Bbb R^+$</span>, because <span class=\"math-container\">$\\tan$</span> and <span class=\"math-container\">$\\tanh$</span> are real valued when restricted to <span class=\"math-container\">$\\Bbb R$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Hence, it is enough to show that if <span class=\"math-container\">$\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)$</span> is real, then <span class=\"math-container\">$b = 0$</span>. For this quantity to be real, we need <span class=\"math-container\">$\\tanh b(1 + \\tan^2 a) = 0$</span>. This forces either <span class=\"math-container\">$\\tanh b = 0$</span> or <span class=\"math-container\">$1 + \\tan^2 a = 0$</span>. However, <span class=\"math-container\">$\\tan^2 a\\geq 0$</span>, so <span class=\"math-container\">$1 + \\tan^2 a\\geq 1$</span>. Thus, if <span class=\"math-container\">$z\\tan z$</span> is real, we must have <span class=\"math-container\">$\\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class=\"math-container\">$b = 0$</span>, which means that <span class=\"math-container\">$a + bi\\in\\Bbb R$</span>.</p>\n</blockquote>\n" }, { "answer_id": 4482217, "score": 1, "text": "<p>(See <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi\">Real and imaginary part of $\\tan(a+bi)$</a> )</p>\n<p><span class=\"math-container\">$$\\tan(x+iy)=\\frac{\\sin2x+i\\sinh2y}{\\cos2x+\\cosh2y}$$</span>\n<span class=\"math-container\">$$(x+iy)\\tan(x+iy)=\\frac{(x\\sin2x-y\\sinh2y)+i(x\\sinh2y+y\\sin2x)}{\\cos2x+\\cosh2y}$$</span></p>\n<p>This shows that <span class=\"math-container\">$(x+iy)\\tan(x+iy)$</span> is real exactly when</p>\n<p><span class=\"math-container\">$$x\\sinh2y=-y\\sin2x.$$</span></p>\n<p>We want to know whether there are solutions with <span class=\"math-container\">$y\\neq0$</span>. If <span class=\"math-container\">$x=0$</span>, then the real part of the function has the same sign as <span class=\"math-container\">$-y\\sinh2y$</span>, which has the same sign as <span class=\"math-container\">$-y^2$</span>, which is not positive. So assume <span class=\"math-container\">$xy\\neq0$</span>. Then we have</p>\n<p><span class=\"math-container\">$$\\frac{\\sinh2y}{2y}=-\\frac{\\sin2x}{2x}.$$</span></p>\n<p>But the left side is always greater than <span class=\"math-container\">$1$</span>, while the right side is always smaller than <span class=\"math-container\">$1$</span>. 
So there is no solution here.</p>\n<p>Thus <span class=\"math-container\">$z\\tan z$</span> is a positive real number only when <span class=\"math-container\">$z$</span> is real.</p>\n" }, { "answer_id": 4482336, "score": 0, "text": "<p>The solution uses the partial fraction <a href=\"https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion\" rel=\"nofollow noreferrer\">expansion</a>\n<span class=\"math-container\">$$\\pi z\\cdot \\tan \\pi z = \\sum_{k \\ \\textrm{odd integer}} \\frac{z^2}{(\\frac{k}{2})^2 - z^2}$$</span></p>\n<p>Consider the rational fraction <span class=\"math-container\">$\\phi_a(\\cdot)$</span> with<br />\n<span class=\"math-container\">$$\\phi_a(z) = \\frac{z}{a-z} = \\frac{a}{a-z}-1$$</span></p>\n<p>If <span class=\"math-container\">$a&gt;0$</span> then <span class=\"math-container\">$\\phi_a$</span> leaves the upper (lower) half complex plane invariant, that is <span class=\"math-container\">$\\phi_a(\\mathcal{H}_{\\pm}) \\subset \\mathcal{H}_{\\pm}$</span> (in fact here we have equality). Now the map <span class=\"math-container\">$z\\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class=\"math-container\">$\\mathcal{H}_+$</span>, and the second one to the lower half plane <span class=\"math-container\">$\\mathcal{H}_-$</span>. Therefore, for <span class=\"math-container\">$a&gt;0$</span> the function\n<span class=\"math-container\">$$z\\ \\mapsto \\frac{z^2}{a-z^2}$$</span>\nmaps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p>\n<p><span class=\"math-container\">$$f\\colon z\\mapsto \\sum_{a&gt;0} c_a \\cdot \\frac{z^2}{a- z^2}$$</span></p>\n<p>takes the first quadrant (Re, Im <span class=\"math-container\">$&gt;0$</span>) to the upper half plane (Im <span class=\"math-container\">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class=\"math-container\">$f^{-1}(\\mathbb{R}) = \\mathbb{R}$</span>.</p>\n<p><span class=\"math-container\">$\\bf{Added:}$</span> Another method</p>\n<p>We <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323\">have</a></p>\n<p><span class=\"math-container\">$$\\tan(x+i y) = \\frac{\\sin 2x + i \\sinh 2y}{\\cos 2x + \\cosh 2 y}$$</span></p>\n<p>Let us show that <span class=\"math-container\">$f(z) \\colon = z\\ \\tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class=\"math-container\">$f(\\bar z) = \\bar f(z)$</span> (<span class=\"math-container\">$f$</span> real) we see to what half plane will each quadrant go.</p>\n<p>For <span class=\"math-container\">$y\\ne 0$</span> we have <span class=\"math-container\">$\\cosh 2y &gt;1$</span> so the denominator <span class=\"math-container\">$\\cos 2x + \\cosh 2y&gt;0$</span>. Let us show that for <span class=\"math-container\">$x\\cdot y&gt;0$</span> the imaginary part of\n<span class=\"math-container\">$(x+i y) ( \\sin 2 x + i \\sinh 2y)$</span>\nis <span class=\"math-container\">$&gt;0$</span>, that is <span class=\"math-container\">$x \\sinh 2y + y \\sin 2x&gt;0$</span>. 
With <span class=\"math-container\">$2x = s$</span>, <span class=\"math-container\">$2y = t$</span>, <span class=\"math-container\">$\\ s \\cdot t&gt;0$</span> we have to show that\n<span class=\"math-container\">$s \\sinh t + t \\sin s &gt; 0$</span>, or\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} + \\frac{\\sin s}{s} &gt;0$$</span></p>\n<p>Now\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} = 1 + \\frac{t^2}{3!} + \\frac{t^4}{5!}+ \\cdots &gt; 1$$</span>\nfor all <span class=\"math-container\">$t \\in \\mathbb{R}$</span>, <span class=\"math-container\">$t\\ne 0$</span>, while</p>\n<p><span class=\"math-container\">$$1 + \\frac{\\sin s}{s} = \\frac{1}{s} \\cdot \\int_0 ^s (1 + \\cos u) du &gt;0$$</span>\nfor all <span class=\"math-container\">$s\\in \\mathbb{R}$</span>, <span class=\"math-container\">$s\\ne 0$</span>. We are done.</p>\n" }, { "answer_id": 2099130, "score": 1, "text": "<p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class=\"math-container\">$z,$</span> which changes the problem. This solution is not correct as written!</p>\n<hr />\n<blockquote class=\"spoiler\">\n<p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class=\"math-container\">$k\\in\\Bbb R$</span>, not just <span class=\"math-container\">$k\\in\\Bbb R^+$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Say <span class=\"math-container\">$z = a + bi$</span>. 
Then <span class=\"math-container\">$$z\\tan z = \\frac{\\tan a + i\\tanh b}{1 - i\\tan a\\tanh b} = \\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\left(\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)\\right).$$</span> <span class=\"math-container\">$\\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\in\\Bbb R^+$</span>, because <span class=\"math-container\">$\\tan$</span> and <span class=\"math-container\">$\\tanh$</span> are real valued when restricted to <span class=\"math-container\">$\\Bbb R$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Hence, it is enough to show that if <span class=\"math-container\">$\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)$</span> is real, then <span class=\"math-container\">$b = 0$</span>. For this quantity to be real, we need <span class=\"math-container\">$\\tanh b(1 + \\tan^2 a) = 0$</span>. This forces either <span class=\"math-container\">$\\tanh b = 0$</span> or <span class=\"math-container\">$1 + \\tan^2 a = 0$</span>. However, <span class=\"math-container\">$\\tan^2 a\\geq 0$</span>, so <span class=\"math-container\">$1 + \\tan^2 a\\geq 1$</span>. Thus, if <span class=\"math-container\">$z\\tan z$</span> is real, we must have <span class=\"math-container\">$\\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class=\"math-container\">$b = 0$</span>, which means that <span class=\"math-container\">$a + bi\\in\\Bbb R$</span>.</p>\n</blockquote>\n" }, { "answer_id": 4482217, "score": 1, "text": "<p>(See <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi\">Real and imaginary part of $\\tan(a+bi)$</a> )</p>\n<p><span class=\"math-container\">$$\\tan(x+iy)=\\frac{\\sin2x+i\\sinh2y}{\\cos2x+\\cosh2y}$$</span>\n<span class=\"math-container\">$$(x+iy)\\tan(x+iy)=\\frac{(x\\sin2x-y\\sinh2y)+i(x\\sinh2y+y\\sin2x)}{\\cos2x+\\cosh2y}$$</span></p>\n<p>This shows that <span class=\"math-container\">$(x+iy)\\tan(x+iy)$</span> is real exactly when</p>\n<p><span class=\"math-container\">$$x\\sinh2y=-y\\sin2x.$$</span></p>\n<p>We want to know whether there are solutions with <span class=\"math-container\">$y\\neq0$</span>. If <span class=\"math-container\">$x=0$</span>, then the real part of the function has the same sign as <span class=\"math-container\">$-y\\sinh2y$</span>, which has the same sign as <span class=\"math-container\">$-y^2$</span>, which is not positive. So assume <span class=\"math-container\">$xy\\neq0$</span>. Then we have</p>\n<p><span class=\"math-container\">$$\\frac{\\sinh2y}{2y}=-\\frac{\\sin2x}{2x}.$$</span></p>\n<p>But the left side is always greater than <span class=\"math-container\">$1$</span>, while the right side is always smaller than <span class=\"math-container\">$1$</span>. 
So there is no solution here.</p>\n<p>Thus <span class=\"math-container\">$z\\tan z$</span> is a positive real number only when <span class=\"math-container\">$z$</span> is real.</p>\n" }, { "answer_id": 4482336, "score": 0, "text": "<p>The solution uses the partial fraction <a href=\"https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion\" rel=\"nofollow noreferrer\">expansion</a>\n<span class=\"math-container\">$$\\pi z\\cdot \\tan \\pi z = \\sum_{k \\ \\textrm{odd integer}} \\frac{z^2}{(\\frac{k}{2})^2 - z^2}$$</span></p>\n<p>Consider the rational fraction <span class=\"math-container\">$\\phi_a(\\cdot)$</span> with<br />\n<span class=\"math-container\">$$\\phi_a(z) = \\frac{z}{a-z} = \\frac{a}{a-z}-1$$</span></p>\n<p>If <span class=\"math-container\">$a&gt;0$</span> then <span class=\"math-container\">$\\phi_a$</span> leaves the upper (lower) half complex plane invariant, that is <span class=\"math-container\">$\\phi_a(\\mathcal{H}_{\\pm}) \\subset \\mathcal{H}_{\\pm}$</span> (in fact here we have equality). Now the map <span class=\"math-container\">$z\\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class=\"math-container\">$\\mathcal{H}_+$</span>, and the second one to the lower half plane <span class=\"math-container\">$\\mathcal{H}_-$</span>. Therefore, for <span class=\"math-container\">$a&gt;0$</span> the function\n<span class=\"math-container\">$$z\\ \\mapsto \\frac{z^2}{a-z^2}$$</span>\nmaps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p>\n<p><span class=\"math-container\">$$f\\colon z\\mapsto \\sum_{a&gt;0} c_a \\cdot \\frac{z^2}{a- z^2}$$</span></p>\n<p>takes the first quadrant (Re, Im <span class=\"math-container\">$&gt;0$</span>) to the upper half plane (Im <span class=\"math-container\">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class=\"math-container\">$f^{-1}(\\mathbb{R}) = \\mathbb{R}$</span>.</p>\n<p><span class=\"math-container\">$\\bf{Added:}$</span> Another method</p>\n<p>We <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323\">have</a></p>\n<p><span class=\"math-container\">$$\\tan(x+i y) = \\frac{\\sin 2x + i \\sinh 2y}{\\cos 2x + \\cosh 2 y}$$</span></p>\n<p>Let us show that <span class=\"math-container\">$f(z) \\colon = z\\ \\tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class=\"math-container\">$f(\\bar z) = \\bar f(z)$</span> (<span class=\"math-container\">$f$</span> real) we see to what half plane will each quadrant go.</p>\n<p>For <span class=\"math-container\">$y\\ne 0$</span> we have <span class=\"math-container\">$\\cosh 2y &gt;1$</span> so the denominator <span class=\"math-container\">$\\cos 2x + \\cosh 2y&gt;0$</span>. Let us show that for <span class=\"math-container\">$x\\cdot y&gt;0$</span> the imaginary part of\n<span class=\"math-container\">$(x+i y) ( \\sin 2 x + i \\sinh 2y)$</span>\nis <span class=\"math-container\">$&gt;0$</span>, that is <span class=\"math-container\">$x \\sinh 2y + y \\sin 2x&gt;0$</span>. 
With <span class=\"math-container\">$2x = s$</span>, <span class=\"math-container\">$2y = t$</span>, <span class=\"math-container\">$\\ s \\cdot t&gt;0$</span> we have to show that\n<span class=\"math-container\">$s \\sinh t + t \\sin s &gt; 0$</span>, or\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} + \\frac{\\sin s}{s} &gt;0$$</span></p>\n<p>Now\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} = 1 + \\frac{t^2}{3!} + \\frac{t^4}{5!}+ \\cdots &gt; 1$$</span>\nfor all <span class=\"math-container\">$t \\in \\mathbb{R}$</span>, <span class=\"math-container\">$t\\ne 0$</span>, while</p>\n<p><span class=\"math-container\">$$1 + \\frac{\\sin s}{s} = \\frac{1}{s} \\cdot \\int_0 ^s (1 + \\cos u) du &gt;0$$</span>\nfor all <span class=\"math-container\">$s\\in \\mathbb{R}$</span>, <span class=\"math-container\">$s\\ne 0$</span>. We are done.</p>\n" }, { "answer_id": 2099130, "score": 1, "text": "<p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class=\"math-container\">$z,$</span> which changes the problem. This solution is not correct as written!</p>\n<hr />\n<blockquote class=\"spoiler\">\n<p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class=\"math-container\">$k\\in\\Bbb R$</span>, not just <span class=\"math-container\">$k\\in\\Bbb R^+$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Say <span class=\"math-container\">$z = a + bi$</span>. 
Then <span class=\"math-container\">$$z\\tan z = \\frac{\\tan a + i\\tanh b}{1 - i\\tan a\\tanh b} = \\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\left(\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)\\right).$$</span> <span class=\"math-container\">$\\left(\\frac{1}{1 + \\tan^2 a\\tanh^2 b}\\right)\\in\\Bbb R^+$</span>, because <span class=\"math-container\">$\\tan$</span> and <span class=\"math-container\">$\\tanh$</span> are real valued when restricted to <span class=\"math-container\">$\\Bbb R$</span>.</p>\n</blockquote>\n<blockquote class=\"spoiler\">\n<p>Hence, it is enough to show that if <span class=\"math-container\">$\\tan a(1 - \\tanh^2 b) + i\\tanh b(1 + \\tan^2 a)$</span> is real, then <span class=\"math-container\">$b = 0$</span>. For this quantity to be real, we need <span class=\"math-container\">$\\tanh b(1 + \\tan^2 a) = 0$</span>. This forces either <span class=\"math-container\">$\\tanh b = 0$</span> or <span class=\"math-container\">$1 + \\tan^2 a = 0$</span>. However, <span class=\"math-container\">$\\tan^2 a\\geq 0$</span>, so <span class=\"math-container\">$1 + \\tan^2 a\\geq 1$</span>. Thus, if <span class=\"math-container\">$z\\tan z$</span> is real, we must have <span class=\"math-container\">$\\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class=\"math-container\">$b = 0$</span>, which means that <span class=\"math-container\">$a + bi\\in\\Bbb R$</span>.</p>\n</blockquote>\n" }, { "answer_id": 4482217, "score": 1, "text": "<p>(See <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi\">Real and imaginary part of $\\tan(a+bi)$</a> )</p>\n<p><span class=\"math-container\">$$\\tan(x+iy)=\\frac{\\sin2x+i\\sinh2y}{\\cos2x+\\cosh2y}$$</span>\n<span class=\"math-container\">$$(x+iy)\\tan(x+iy)=\\frac{(x\\sin2x-y\\sinh2y)+i(x\\sinh2y+y\\sin2x)}{\\cos2x+\\cosh2y}$$</span></p>\n<p>This shows that <span class=\"math-container\">$(x+iy)\\tan(x+iy)$</span> is real exactly when</p>\n<p><span class=\"math-container\">$$x\\sinh2y=-y\\sin2x.$$</span></p>\n<p>We want to know whether there are solutions with <span class=\"math-container\">$y\\neq0$</span>. If <span class=\"math-container\">$x=0$</span>, then the real part of the function has the same sign as <span class=\"math-container\">$-y\\sinh2y$</span>, which has the same sign as <span class=\"math-container\">$-y^2$</span>, which is not positive. So assume <span class=\"math-container\">$xy\\neq0$</span>. Then we have</p>\n<p><span class=\"math-container\">$$\\frac{\\sinh2y}{2y}=-\\frac{\\sin2x}{2x}.$$</span></p>\n<p>But the left side is always greater than <span class=\"math-container\">$1$</span>, while the right side is always smaller than <span class=\"math-container\">$1$</span>. 
So there is no solution here.</p>\n<p>Thus <span class=\"math-container\">$z\\tan z$</span> is a positive real number only when <span class=\"math-container\">$z$</span> is real.</p>\n" }, { "answer_id": 4482336, "score": 0, "text": "<p>The solution uses the partial fraction <a href=\"https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion\" rel=\"nofollow noreferrer\">expansion</a>\n<span class=\"math-container\">$$\\pi z\\cdot \\tan \\pi z = \\sum_{k \\ \\textrm{odd integer}} \\frac{z^2}{(\\frac{k}{2})^2 - z^2}$$</span></p>\n<p>Consider the rational fraction <span class=\"math-container\">$\\phi_a(\\cdot)$</span> with<br />\n<span class=\"math-container\">$$\\phi_a(z) = \\frac{z}{a-z} = \\frac{a}{a-z}-1$$</span></p>\n<p>If <span class=\"math-container\">$a&gt;0$</span> then <span class=\"math-container\">$\\phi_a$</span> leaves the upper (lower) half complex plane invariant, that is <span class=\"math-container\">$\\phi_a(\\mathcal{H}_{\\pm}) \\subset \\mathcal{H}_{\\pm}$</span> (in fact here we have equality). Now the map <span class=\"math-container\">$z\\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class=\"math-container\">$\\mathcal{H}_+$</span>, and the second one to the lower half plane <span class=\"math-container\">$\\mathcal{H}_-$</span>. Therefore, for <span class=\"math-container\">$a&gt;0$</span> the function\n<span class=\"math-container\">$$z\\ \\mapsto \\frac{z^2}{a-z^2}$$</span>\nmaps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p>\n<p><span class=\"math-container\">$$f\\colon z\\mapsto \\sum_{a&gt;0} c_a \\cdot \\frac{z^2}{a- z^2}$$</span></p>\n<p>takes the first quadrant (Re, Im <span class=\"math-container\">$&gt;0$</span>) to the upper half plane (Im <span class=\"math-container\">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class=\"math-container\">$f^{-1}(\\mathbb{R}) = \\mathbb{R}$</span>.</p>\n<p><span class=\"math-container\">$\\bf{Added:}$</span> Another method</p>\n<p>We <a href=\"https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323\">have</a></p>\n<p><span class=\"math-container\">$$\\tan(x+i y) = \\frac{\\sin 2x + i \\sinh 2y}{\\cos 2x + \\cosh 2 y}$$</span></p>\n<p>Let us show that <span class=\"math-container\">$f(z) \\colon = z\\ \\tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class=\"math-container\">$f(\\bar z) = \\bar f(z)$</span> (<span class=\"math-container\">$f$</span> real) we see to what half plane will each quadrant go.</p>\n<p>For <span class=\"math-container\">$y\\ne 0$</span> we have <span class=\"math-container\">$\\cosh 2y &gt;1$</span> so the denominator <span class=\"math-container\">$\\cos 2x + \\cosh 2y&gt;0$</span>. Let us show that for <span class=\"math-container\">$x\\cdot y&gt;0$</span> the imaginary part of\n<span class=\"math-container\">$(x+i y) ( \\sin 2 x + i \\sinh 2y)$</span>\nis <span class=\"math-container\">$&gt;0$</span>, that is <span class=\"math-container\">$x \\sinh 2y + y \\sin 2x&gt;0$</span>. 
With <span class=\"math-container\">$2x = s$</span>, <span class=\"math-container\">$2y = t$</span>, <span class=\"math-container\">$\\ s \\cdot t&gt;0$</span> we have to show that\n<span class=\"math-container\">$s \\sinh t + t \\sin s &gt; 0$</span>, or\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} + \\frac{\\sin s}{s} &gt;0$$</span></p>\n<p>Now\n<span class=\"math-container\">$$\\frac{\\sinh t}{t} = 1 + \\frac{t^2}{3!} + \\frac{t^4}{5!}+ \\cdots &gt; 1$$</span>\nfor all <span class=\"math-container\">$t \\in \\mathbb{R}$</span>, <span class=\"math-container\">$t\\ne 0$</span>, while</p>\n<p><span class=\"math-container\">$$1 + \\frac{\\sin s}{s} = \\frac{1}{s} \\cdot \\int_0 ^s (1 + \\cos u) du &gt;0$$</span>\nfor all <span class=\"math-container\">$s\\in \\mathbb{R}$</span>, <span class=\"math-container\">$s\\ne 0$</span>. We are done.</p>\n" } ]
Question: Prove that all roots of $z\tan z = k$ lie in $\Bbb R$, where $k$ is a positive, non-zero real number. Answer 1: <p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class="math-container">$z,$</span> which changes the problem. This solution is not correct as written!</p> <hr /> <blockquote class="spoiler"> <p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class="math-container">$k\in\Bbb R$</span>, not just <span class="math-container">$k\in\Bbb R^+$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Say <span class="math-container">$z = a + bi$</span>. Then <span class="math-container">$$z\tan z = \frac{\tan a + i\tanh b}{1 - i\tan a\tanh b} = \left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\left(\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)\right).$$</span> <span class="math-container">$\left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\in\Bbb R^+$</span>, because <span class="math-container">$\tan$</span> and <span class="math-container">$\tanh$</span> are real valued when restricted to <span class="math-container">$\Bbb R$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Hence, it is enough to show that if <span class="math-container">$\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)$</span> is real, then <span class="math-container">$b = 0$</span>. For this quantity to be real, we need <span class="math-container">$\tanh b(1 + \tan^2 a) = 0$</span>. This forces either <span class="math-container">$\tanh b = 0$</span> or <span class="math-container">$1 + \tan^2 a = 0$</span>. However, <span class="math-container">$\tan^2 a\geq 0$</span>, so <span class="math-container">$1 + \tan^2 a\geq 1$</span>. Thus, if <span class="math-container">$z\tan z$</span> is real, we must have <span class="math-container">$\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class="math-container">$b = 0$</span>, which means that <span class="math-container">$a + bi\in\Bbb R$</span>.</p> </blockquote> Answer 2: <p>(See <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi">Real and imaginary part of $\tan(a+bi)$</a> )</p> <p><span class="math-container">$$\tan(x+iy)=\frac{\sin2x+i\sinh2y}{\cos2x+\cosh2y}$$</span> <span class="math-container">$$(x+iy)\tan(x+iy)=\frac{(x\sin2x-y\sinh2y)+i(x\sinh2y+y\sin2x)}{\cos2x+\cosh2y}$$</span></p> <p>This shows that <span class="math-container">$(x+iy)\tan(x+iy)$</span> is real exactly when</p> <p><span class="math-container">$$x\sinh2y=-y\sin2x.$$</span></p> <p>We want to know whether there are solutions with <span class="math-container">$y\neq0$</span>. If <span class="math-container">$x=0$</span>, then the real part of the function has the same sign as <span class="math-container">$-y\sinh2y$</span>, which has the same sign as <span class="math-container">$-y^2$</span>, which is not positive. So assume <span class="math-container">$xy\neq0$</span>. Then we have</p> <p><span class="math-container">$$\frac{\sinh2y}{2y}=-\frac{\sin2x}{2x}.$$</span></p> <p>But the left side is always greater than <span class="math-container">$1$</span>, while the right side is always smaller than <span class="math-container">$1$</span>. 
So there is no solution here.</p>
<p>Thus <span class="math-container">$z\tan z$</span> is a positive real number only when <span class="math-container">$z$</span> is real.</p>
Answer 3: <p>The solution uses the partial fraction <a href="https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion" rel="nofollow noreferrer">expansion</a>
<span class="math-container">$$\pi z\cdot \tan \pi z = \sum_{k \ \textrm{odd integer}} \frac{z^2}{(\frac{k}{2})^2 - z^2}$$</span></p>
<p>Consider the rational fraction <span class="math-container">$\phi_a(\cdot)$</span> with<br />
<span class="math-container">$$\phi_a(z) = \frac{z}{a-z} = \frac{a}{a-z}-1$$</span></p>
<p>If <span class="math-container">$a&gt;0$</span> then <span class="math-container">$\phi_a$</span> leaves the upper (lower) half complex plane invariant, that is <span class="math-container">$\phi_a(\mathcal{H}_{\pm}) \subset \mathcal{H}_{\pm}$</span> (in fact here we have equality). Now the map <span class="math-container">$z\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class="math-container">$\mathcal{H}_+$</span>, and the second one to the lower half plane <span class="math-container">$\mathcal{H}_-$</span>. Therefore, for <span class="math-container">$a&gt;0$</span> the function
<span class="math-container">$$z\ \mapsto \frac{z^2}{a-z^2}$$</span>
maps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p>
<p><span class="math-container">$$f\colon z\mapsto \sum_{a&gt;0} c_a \cdot \frac{z^2}{a- z^2}$$</span></p>
<p>takes the first quadrant (Re, Im <span class="math-container">$&gt;0$</span>) to the upper half plane (Im <span class="math-container">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class="math-container">$f^{-1}(\mathbb{R}) = \mathbb{R}$</span>.</p>
<p><span class="math-container">$\bf{Added:}$</span> Another method</p>
<p>We <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323">have</a></p>
<p><span class="math-container">$$\tan(x+i y) = \frac{\sin 2x + i \sinh 2y}{\cos 2x + \cosh 2 y}$$</span></p>
<p>Let us show that <span class="math-container">$f(z) \colon = z\ \tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class="math-container">$f(\bar z) = \bar f(z)$</span> (<span class="math-container">$f$</span> real) we see to what half plane will each quadrant go.</p>
<p>For <span class="math-container">$y\ne 0$</span> we have <span class="math-container">$\cosh 2y &gt;1$</span> so the denominator <span class="math-container">$\cos 2x + \cosh 2y&gt;0$</span>. Let us show that for <span class="math-container">$x\cdot y&gt;0$</span> the imaginary part of <span class="math-container">$(x+i y) ( \sin 2 x + i \sinh 2y)$</span> is <span class="math-container">$&gt;0$</span>, that is <span class="math-container">$x \sinh 2y + y \sin 2x&gt;0$</span>. With <span class="math-container">$2x = s$</span>, <span class="math-container">$2y = t$</span>, <span class="math-container">$\ s \cdot t&gt;0$</span> we have to show that <span class="math-container">$s \sinh t + t \sin s &gt; 0$</span>, or <span class="math-container">$$\frac{\sinh t}{t} + \frac{\sin s}{s} &gt;0$$</span></p>
<p>Now <span class="math-container">$$\frac{\sinh t}{t} = 1 + \frac{t^2}{3!} + \frac{t^4}{5!}+ \cdots &gt; 1$$</span> for all <span class="math-container">$t \in \mathbb{R}$</span>, <span class="math-container">$t\ne 0$</span>, while</p>
<p><span class="math-container">$$1 + \frac{\sin s}{s} = \frac{1}{s} \cdot \int_0 ^s (1 + \cos u) du &gt;0$$</span> for all <span class="math-container">$s\in \mathbb{R}$</span>, <span class="math-container">$s\ne 0$</span>. 
We are done.</p> Answer 4: <p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class="math-container">$z,$</span> which changes the problem. This solution is not correct as written!</p> <hr /> <blockquote class="spoiler"> <p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class="math-container">$k\in\Bbb R$</span>, not just <span class="math-container">$k\in\Bbb R^+$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Say <span class="math-container">$z = a + bi$</span>. Then <span class="math-container">$$z\tan z = \frac{\tan a + i\tanh b}{1 - i\tan a\tanh b} = \left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\left(\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)\right).$$</span> <span class="math-container">$\left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\in\Bbb R^+$</span>, because <span class="math-container">$\tan$</span> and <span class="math-container">$\tanh$</span> are real valued when restricted to <span class="math-container">$\Bbb R$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Hence, it is enough to show that if <span class="math-container">$\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)$</span> is real, then <span class="math-container">$b = 0$</span>. For this quantity to be real, we need <span class="math-container">$\tanh b(1 + \tan^2 a) = 0$</span>. This forces either <span class="math-container">$\tanh b = 0$</span> or <span class="math-container">$1 + \tan^2 a = 0$</span>. However, <span class="math-container">$\tan^2 a\geq 0$</span>, so <span class="math-container">$1 + \tan^2 a\geq 1$</span>. Thus, if <span class="math-container">$z\tan z$</span> is real, we must have <span class="math-container">$\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class="math-container">$b = 0$</span>, which means that <span class="math-container">$a + bi\in\Bbb R$</span>.</p> </blockquote> Answer 5: <p>(See <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi">Real and imaginary part of $\tan(a+bi)$</a> )</p> <p><span class="math-container">$$\tan(x+iy)=\frac{\sin2x+i\sinh2y}{\cos2x+\cosh2y}$$</span> <span class="math-container">$$(x+iy)\tan(x+iy)=\frac{(x\sin2x-y\sinh2y)+i(x\sinh2y+y\sin2x)}{\cos2x+\cosh2y}$$</span></p> <p>This shows that <span class="math-container">$(x+iy)\tan(x+iy)$</span> is real exactly when</p> <p><span class="math-container">$$x\sinh2y=-y\sin2x.$$</span></p> <p>We want to know whether there are solutions with <span class="math-container">$y\neq0$</span>. If <span class="math-container">$x=0$</span>, then the real part of the function has the same sign as <span class="math-container">$-y\sinh2y$</span>, which has the same sign as <span class="math-container">$-y^2$</span>, which is not positive. So assume <span class="math-container">$xy\neq0$</span>. Then we have</p> <p><span class="math-container">$$\frac{\sinh2y}{2y}=-\frac{\sin2x}{2x}.$$</span></p> <p>But the left side is always greater than <span class="math-container">$1$</span>, while the right side is always smaller than <span class="math-container">$1$</span>. 
So there is no solution here.</p>
<p>Thus <span class="math-container">$z\tan z$</span> is a positive real number only when <span class="math-container">$z$</span> is real.</p>
Answer 6: <p>The solution uses the partial fraction <a href="https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion" rel="nofollow noreferrer">expansion</a>
<span class="math-container">$$\pi z\cdot \tan \pi z = \sum_{k \ \textrm{odd integer}} \frac{z^2}{(\frac{k}{2})^2 - z^2}$$</span></p>
<p>Consider the rational fraction <span class="math-container">$\phi_a(\cdot)$</span> with<br />
<span class="math-container">$$\phi_a(z) = \frac{z}{a-z} = \frac{a}{a-z}-1$$</span></p>
<p>If <span class="math-container">$a&gt;0$</span> then <span class="math-container">$\phi_a$</span> leaves the upper (lower) half complex plane invariant, that is <span class="math-container">$\phi_a(\mathcal{H}_{\pm}) \subset \mathcal{H}_{\pm}$</span> (in fact here we have equality). Now the map <span class="math-container">$z\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class="math-container">$\mathcal{H}_+$</span>, and the second one to the lower half plane <span class="math-container">$\mathcal{H}_-$</span>. Therefore, for <span class="math-container">$a&gt;0$</span> the function
<span class="math-container">$$z\ \mapsto \frac{z^2}{a-z^2}$$</span>
maps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p>
<p><span class="math-container">$$f\colon z\mapsto \sum_{a&gt;0} c_a \cdot \frac{z^2}{a- z^2}$$</span></p>
<p>takes the first quadrant (Re, Im <span class="math-container">$&gt;0$</span>) to the upper half plane (Im <span class="math-container">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class="math-container">$f^{-1}(\mathbb{R}) = \mathbb{R}$</span>.</p>
<p><span class="math-container">$\bf{Added:}$</span> Another method</p>
<p>We <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323">have</a></p>
<p><span class="math-container">$$\tan(x+i y) = \frac{\sin 2x + i \sinh 2y}{\cos 2x + \cosh 2 y}$$</span></p>
<p>Let us show that <span class="math-container">$f(z) \colon = z\ \tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class="math-container">$f(\bar z) = \bar f(z)$</span> (<span class="math-container">$f$</span> real) we see to what half plane will each quadrant go.</p>
<p>For <span class="math-container">$y\ne 0$</span> we have <span class="math-container">$\cosh 2y &gt;1$</span> so the denominator <span class="math-container">$\cos 2x + \cosh 2y&gt;0$</span>. Let us show that for <span class="math-container">$x\cdot y&gt;0$</span> the imaginary part of <span class="math-container">$(x+i y) ( \sin 2 x + i \sinh 2y)$</span> is <span class="math-container">$&gt;0$</span>, that is <span class="math-container">$x \sinh 2y + y \sin 2x&gt;0$</span>. With <span class="math-container">$2x = s$</span>, <span class="math-container">$2y = t$</span>, <span class="math-container">$\ s \cdot t&gt;0$</span> we have to show that <span class="math-container">$s \sinh t + t \sin s &gt; 0$</span>, or <span class="math-container">$$\frac{\sinh t}{t} + \frac{\sin s}{s} &gt;0$$</span></p>
<p>Now <span class="math-container">$$\frac{\sinh t}{t} = 1 + \frac{t^2}{3!} + \frac{t^4}{5!}+ \cdots &gt; 1$$</span> for all <span class="math-container">$t \in \mathbb{R}$</span>, <span class="math-container">$t\ne 0$</span>, while</p>
<p><span class="math-container">$$1 + \frac{\sin s}{s} = \frac{1}{s} \cdot \int_0 ^s (1 + \cos u) du &gt;0$$</span> for all <span class="math-container">$s\in \mathbb{R}$</span>, <span class="math-container">$s\ne 0$</span>. 
We are done.</p> Answer 7: <p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class="math-container">$z,$</span> which changes the problem. This solution is not correct as written!</p> <hr /> <blockquote class="spoiler"> <p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class="math-container">$k\in\Bbb R$</span>, not just <span class="math-container">$k\in\Bbb R^+$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Say <span class="math-container">$z = a + bi$</span>. Then <span class="math-container">$$z\tan z = \frac{\tan a + i\tanh b}{1 - i\tan a\tanh b} = \left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\left(\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)\right).$$</span> <span class="math-container">$\left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\in\Bbb R^+$</span>, because <span class="math-container">$\tan$</span> and <span class="math-container">$\tanh$</span> are real valued when restricted to <span class="math-container">$\Bbb R$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Hence, it is enough to show that if <span class="math-container">$\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)$</span> is real, then <span class="math-container">$b = 0$</span>. For this quantity to be real, we need <span class="math-container">$\tanh b(1 + \tan^2 a) = 0$</span>. This forces either <span class="math-container">$\tanh b = 0$</span> or <span class="math-container">$1 + \tan^2 a = 0$</span>. However, <span class="math-container">$\tan^2 a\geq 0$</span>, so <span class="math-container">$1 + \tan^2 a\geq 1$</span>. Thus, if <span class="math-container">$z\tan z$</span> is real, we must have <span class="math-container">$\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class="math-container">$b = 0$</span>, which means that <span class="math-container">$a + bi\in\Bbb R$</span>.</p> </blockquote> Answer 8: <p>(See <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi">Real and imaginary part of $\tan(a+bi)$</a> )</p> <p><span class="math-container">$$\tan(x+iy)=\frac{\sin2x+i\sinh2y}{\cos2x+\cosh2y}$$</span> <span class="math-container">$$(x+iy)\tan(x+iy)=\frac{(x\sin2x-y\sinh2y)+i(x\sinh2y+y\sin2x)}{\cos2x+\cosh2y}$$</span></p> <p>This shows that <span class="math-container">$(x+iy)\tan(x+iy)$</span> is real exactly when</p> <p><span class="math-container">$$x\sinh2y=-y\sin2x.$$</span></p> <p>We want to know whether there are solutions with <span class="math-container">$y\neq0$</span>. If <span class="math-container">$x=0$</span>, then the real part of the function has the same sign as <span class="math-container">$-y\sinh2y$</span>, which has the same sign as <span class="math-container">$-y^2$</span>, which is not positive. So assume <span class="math-container">$xy\neq0$</span>. Then we have</p> <p><span class="math-container">$$\frac{\sinh2y}{2y}=-\frac{\sin2x}{2x}.$$</span></p> <p>But the left side is always greater than <span class="math-container">$1$</span>, while the right side is always smaller than <span class="math-container">$1$</span>. 
So there is no solution here.</p> <p>Thus <span class="math-container">$z\tan z$</span> is a positive real number only when <span class="math-container">$z$</span> is real.</p> Answer 9: <p>The solution uses the partial fraction <a href="https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion" rel="nofollow noreferrer">expansion</a> <span class="math-container">$$\pi z\cdot \tan \pi z = \sum_{k \ \textrm{odd integer}} \frac{z^2}{(\frac{k}{2})^2 - z^2}$$</span></p> <p>Consider the rational fraction <span class="math-container">$\phi_a(\cdot)$</span> with<br /> <span class="math-container">$$\phi_a(z) = \frac{z}{a-z} = \frac{a}{a-z}-1$$</span></p> <p>If <span class="math-container">$a&gt;0$</span> then <span class="math-container">$\phi_a$</span> invariates the upper (lower) half complex plane, that is <span class="math-container">$\phi_a(\mathcal{H}_{\pm}) \subset \mathcal{H}_{\pm}$</span> ( in fact here we have equality). Now the map <span class="math-container">$z\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class="math-container">$\mathcal{H}_+$</span>, and the second one to the lower half plane <span class="math-container">$\mathcal{H}_-$</span>. Therefore, for <span class="math-container">$a&gt;0$</span> the function <span class="math-container">$$z\ \mapsto \frac{z^2}{a-z^2}$$</span> maps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p> <p><span class="math-container">$$f\colon z\mapsto \sum_{a&gt;0} c_a \cdot \frac{z^2}{a- z^2}$$</span></p> <p>takes the first quadrant (Re, Im <span class="math-container">$&gt;0$</span>) to the upper half plane ( Im <span class="math-container">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class="math-container">$f^{-1}(\mathbb{R}) = \mathbb{R}$</span>.</p> <p><span class="math-container">$\bf{Added:}$</span> Another method</p> <p>We <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323">have</a></p> <p><span class="math-container">$$\tan(x+i y) = \frac{\sin 2x + i \sinh 2y}{\cos 2x + \cosh 2 y}$$</span></p> <p>Let us show that <span class="math-container">$f(z) \colon = z\ \tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class="math-container">$f(\bar z) = \bar f(z)$</span> (<span class="math-container">$f$</span> real) we see to what half plane will each quadrant go.</p> <p>For <span class="math-container">$y\ne 0$</span> we have <span class="math-container">$\cosh 2y &gt;1$</span> so the denominator <span class="math-container">$\cos 2x + \cosh y&gt;0$</span>. Let us that for <span class="math-container">$x\cdot y&gt;0$</span> the imaginary part of <span class="math-container">$(x+i y) ( \sin 2 x + i \sinh 2y)$</span> is <span class="math-container">$&gt;0$</span>, that is <span class="math-container">$x \sinh 2y + y \sin 2x&gt;0$</span>. With <span class="math-container">$2x = s$</span>, <span class="math-container">$2y = t$</span>, <span class="math-container">$\ s \cdot t&gt;0$</span> we have to show that <span class="math-container">$s \sinh t + t \sin s &gt; 0$</span>, or <span class="math-container">$$\frac{\sinh t}{t} + \frac{\sin s}{s} &gt;0$$</span></p> <p>Now <span class="math-container">$$\frac{\sinh t}{t} = 1 + \frac{t^2}{3!} + \frac{t^4}{5!}+ \cdots &gt; 1$$</span> for all <span class="math-container">$t \in \mathbb{R}$</span>, <span class="math-container">$t\ne 0$</span>, while</p> <p><span class="math-container">$$1 + \frac{\sin s}{s} = \frac{1}{s} \cdot \int_0 ^s (1 + \cos u) du &gt;0$$</span> for all <span class="math-container">$s\in \mathbb{R}$</span>, <span class="math-container">$s\ne 0$</span>. 
We are done.</p> Answer 10: <p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class="math-container">$z,$</span> which changes the problem. This solution is not correct as written!</p> <hr /> <blockquote class="spoiler"> <p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class="math-container">$k\in\Bbb R$</span>, not just <span class="math-container">$k\in\Bbb R^+$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Say <span class="math-container">$z = a + bi$</span>. Then <span class="math-container">$$z\tan z = \frac{\tan a + i\tanh b}{1 - i\tan a\tanh b} = \left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\left(\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)\right).$$</span> <span class="math-container">$\left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\in\Bbb R^+$</span>, because <span class="math-container">$\tan$</span> and <span class="math-container">$\tanh$</span> are real valued when restricted to <span class="math-container">$\Bbb R$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Hence, it is enough to show that if <span class="math-container">$\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)$</span> is real, then <span class="math-container">$b = 0$</span>. For this quantity to be real, we need <span class="math-container">$\tanh b(1 + \tan^2 a) = 0$</span>. This forces either <span class="math-container">$\tanh b = 0$</span> or <span class="math-container">$1 + \tan^2 a = 0$</span>. However, <span class="math-container">$\tan^2 a\geq 0$</span>, so <span class="math-container">$1 + \tan^2 a\geq 1$</span>. Thus, if <span class="math-container">$z\tan z$</span> is real, we must have <span class="math-container">$\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class="math-container">$b = 0$</span>, which means that <span class="math-container">$a + bi\in\Bbb R$</span>.</p> </blockquote> Answer 11: <p>(See <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi">Real and imaginary part of $\tan(a+bi)$</a> )</p> <p><span class="math-container">$$\tan(x+iy)=\frac{\sin2x+i\sinh2y}{\cos2x+\cosh2y}$$</span> <span class="math-container">$$(x+iy)\tan(x+iy)=\frac{(x\sin2x-y\sinh2y)+i(x\sinh2y+y\sin2x)}{\cos2x+\cosh2y}$$</span></p> <p>This shows that <span class="math-container">$(x+iy)\tan(x+iy)$</span> is real exactly when</p> <p><span class="math-container">$$x\sinh2y=-y\sin2x.$$</span></p> <p>We want to know whether there are solutions with <span class="math-container">$y\neq0$</span>. If <span class="math-container">$x=0$</span>, then the real part of the function has the same sign as <span class="math-container">$-y\sinh2y$</span>, which has the same sign as <span class="math-container">$-y^2$</span>, which is not positive. So assume <span class="math-container">$xy\neq0$</span>. Then we have</p> <p><span class="math-container">$$\frac{\sinh2y}{2y}=-\frac{\sin2x}{2x}.$$</span></p> <p>But the left side is always greater than <span class="math-container">$1$</span>, while the right side is always smaller than <span class="math-container">$1$</span>. 
So there is no solution here.</p> <p>Thus <span class="math-container">$z\tan z$</span> is a positive real number only when <span class="math-container">$z$</span> is real.</p> Answer 12: <p>The solution uses the partial fraction <a href="https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion" rel="nofollow noreferrer">expansion</a> <span class="math-container">$$\pi z\cdot \tan \pi z = \sum_{k \ \textrm{odd integer}} \frac{z^2}{(\frac{k}{2})^2 - z^2}$$</span></p> <p>Consider the rational fraction <span class="math-container">$\phi_a(\cdot)$</span> with<br /> <span class="math-container">$$\phi_a(z) = \frac{z}{a-z} = \frac{a}{a-z}-1$$</span></p> <p>If <span class="math-container">$a&gt;0$</span> then <span class="math-container">$\phi_a$</span> invariates the upper (lower) half complex plane, that is <span class="math-container">$\phi_a(\mathcal{H}_{\pm}) \subset \mathcal{H}_{\pm}$</span> ( in fact here we have equality). Now the map <span class="math-container">$z\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class="math-container">$\mathcal{H}_+$</span>, and the second one to the lower half plane <span class="math-container">$\mathcal{H}_-$</span>. Therefore, for <span class="math-container">$a&gt;0$</span> the function <span class="math-container">$$z\ \mapsto \frac{z^2}{a-z^2}$$</span> maps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p> <p><span class="math-container">$$f\colon z\mapsto \sum_{a&gt;0} c_a \cdot \frac{z^2}{a- z^2}$$</span></p> <p>takes the first quadrant (Re, Im <span class="math-container">$&gt;0$</span>) to the upper half plane ( Im <span class="math-container">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class="math-container">$f^{-1}(\mathbb{R}) = \mathbb{R}$</span>.</p> <p><span class="math-container">$\bf{Added:}$</span> Another method</p> <p>We <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323">have</a></p> <p><span class="math-container">$$\tan(x+i y) = \frac{\sin 2x + i \sinh 2y}{\cos 2x + \cosh 2 y}$$</span></p> <p>Let us show that <span class="math-container">$f(z) \colon = z\ \tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class="math-container">$f(\bar z) = \bar f(z)$</span> (<span class="math-container">$f$</span> real) we see to what half plane will each quadrant go.</p> <p>For <span class="math-container">$y\ne 0$</span> we have <span class="math-container">$\cosh 2y &gt;1$</span> so the denominator <span class="math-container">$\cos 2x + \cosh y&gt;0$</span>. Let us that for <span class="math-container">$x\cdot y&gt;0$</span> the imaginary part of <span class="math-container">$(x+i y) ( \sin 2 x + i \sinh 2y)$</span> is <span class="math-container">$&gt;0$</span>, that is <span class="math-container">$x \sinh 2y + y \sin 2x&gt;0$</span>. With <span class="math-container">$2x = s$</span>, <span class="math-container">$2y = t$</span>, <span class="math-container">$\ s \cdot t&gt;0$</span> we have to show that <span class="math-container">$s \sinh t + t \sin s &gt; 0$</span>, or <span class="math-container">$$\frac{\sinh t}{t} + \frac{\sin s}{s} &gt;0$$</span></p> <p>Now <span class="math-container">$$\frac{\sinh t}{t} = 1 + \frac{t^2}{3!} + \frac{t^4}{5!}+ \cdots &gt; 1$$</span> for all <span class="math-container">$t \in \mathbb{R}$</span>, <span class="math-container">$t\ne 0$</span>, while</p> <p><span class="math-container">$$1 + \frac{\sin s}{s} = \frac{1}{s} \cdot \int_0 ^s (1 + \cos u) du &gt;0$$</span> for all <span class="math-container">$s\in \mathbb{R}$</span>, <span class="math-container">$s\ne 0$</span>. 
We are done.</p> Answer 13: <p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class="math-container">$z,$</span> which changes the problem. This solution is not correct as written!</p> <hr /> <blockquote class="spoiler"> <p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class="math-container">$k\in\Bbb R$</span>, not just <span class="math-container">$k\in\Bbb R^+$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Say <span class="math-container">$z = a + bi$</span>. Then <span class="math-container">$$z\tan z = \frac{\tan a + i\tanh b}{1 - i\tan a\tanh b} = \left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\left(\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)\right).$$</span> <span class="math-container">$\left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\in\Bbb R^+$</span>, because <span class="math-container">$\tan$</span> and <span class="math-container">$\tanh$</span> are real valued when restricted to <span class="math-container">$\Bbb R$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Hence, it is enough to show that if <span class="math-container">$\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)$</span> is real, then <span class="math-container">$b = 0$</span>. For this quantity to be real, we need <span class="math-container">$\tanh b(1 + \tan^2 a) = 0$</span>. This forces either <span class="math-container">$\tanh b = 0$</span> or <span class="math-container">$1 + \tan^2 a = 0$</span>. However, <span class="math-container">$\tan^2 a\geq 0$</span>, so <span class="math-container">$1 + \tan^2 a\geq 1$</span>. Thus, if <span class="math-container">$z\tan z$</span> is real, we must have <span class="math-container">$\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class="math-container">$b = 0$</span>, which means that <span class="math-container">$a + bi\in\Bbb R$</span>.</p> </blockquote> Answer 14: <p>(See <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi">Real and imaginary part of $\tan(a+bi)$</a> )</p> <p><span class="math-container">$$\tan(x+iy)=\frac{\sin2x+i\sinh2y}{\cos2x+\cosh2y}$$</span> <span class="math-container">$$(x+iy)\tan(x+iy)=\frac{(x\sin2x-y\sinh2y)+i(x\sinh2y+y\sin2x)}{\cos2x+\cosh2y}$$</span></p> <p>This shows that <span class="math-container">$(x+iy)\tan(x+iy)$</span> is real exactly when</p> <p><span class="math-container">$$x\sinh2y=-y\sin2x.$$</span></p> <p>We want to know whether there are solutions with <span class="math-container">$y\neq0$</span>. If <span class="math-container">$x=0$</span>, then the real part of the function has the same sign as <span class="math-container">$-y\sinh2y$</span>, which has the same sign as <span class="math-container">$-y^2$</span>, which is not positive. So assume <span class="math-container">$xy\neq0$</span>. Then we have</p> <p><span class="math-container">$$\frac{\sinh2y}{2y}=-\frac{\sin2x}{2x}.$$</span></p> <p>But the left side is always greater than <span class="math-container">$1$</span>, while the right side is always smaller than <span class="math-container">$1$</span>. 
So there is no solution here.</p> <p>Thus <span class="math-container">$z\tan z$</span> is a positive real number only when <span class="math-container">$z$</span> is real.</p> Answer 15: <p>The solution uses the partial fraction <a href="https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion" rel="nofollow noreferrer">expansion</a> <span class="math-container">$$\pi z\cdot \tan \pi z = \sum_{k \ \textrm{odd integer}} \frac{z^2}{(\frac{k}{2})^2 - z^2}$$</span></p> <p>Consider the rational fraction <span class="math-container">$\phi_a(\cdot)$</span> with<br /> <span class="math-container">$$\phi_a(z) = \frac{z}{a-z} = \frac{a}{a-z}-1$$</span></p> <p>If <span class="math-container">$a&gt;0$</span> then <span class="math-container">$\phi_a$</span> invariates the upper (lower) half complex plane, that is <span class="math-container">$\phi_a(\mathcal{H}_{\pm}) \subset \mathcal{H}_{\pm}$</span> ( in fact here we have equality). Now the map <span class="math-container">$z\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class="math-container">$\mathcal{H}_+$</span>, and the second one to the lower half plane <span class="math-container">$\mathcal{H}_-$</span>. Therefore, for <span class="math-container">$a&gt;0$</span> the function <span class="math-container">$$z\ \mapsto \frac{z^2}{a-z^2}$$</span> maps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p> <p><span class="math-container">$$f\colon z\mapsto \sum_{a&gt;0} c_a \cdot \frac{z^2}{a- z^2}$$</span></p> <p>takes the first quadrant (Re, Im <span class="math-container">$&gt;0$</span>) to the upper half plane ( Im <span class="math-container">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class="math-container">$f^{-1}(\mathbb{R}) = \mathbb{R}$</span>.</p> <p><span class="math-container">$\bf{Added:}$</span> Another method</p> <p>We <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323">have</a></p> <p><span class="math-container">$$\tan(x+i y) = \frac{\sin 2x + i \sinh 2y}{\cos 2x + \cosh 2 y}$$</span></p> <p>Let us show that <span class="math-container">$f(z) \colon = z\ \tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class="math-container">$f(\bar z) = \bar f(z)$</span> (<span class="math-container">$f$</span> real) we see to what half plane will each quadrant go.</p> <p>For <span class="math-container">$y\ne 0$</span> we have <span class="math-container">$\cosh 2y &gt;1$</span> so the denominator <span class="math-container">$\cos 2x + \cosh y&gt;0$</span>. Let us that for <span class="math-container">$x\cdot y&gt;0$</span> the imaginary part of <span class="math-container">$(x+i y) ( \sin 2 x + i \sinh 2y)$</span> is <span class="math-container">$&gt;0$</span>, that is <span class="math-container">$x \sinh 2y + y \sin 2x&gt;0$</span>. With <span class="math-container">$2x = s$</span>, <span class="math-container">$2y = t$</span>, <span class="math-container">$\ s \cdot t&gt;0$</span> we have to show that <span class="math-container">$s \sinh t + t \sin s &gt; 0$</span>, or <span class="math-container">$$\frac{\sinh t}{t} + \frac{\sin s}{s} &gt;0$$</span></p> <p>Now <span class="math-container">$$\frac{\sinh t}{t} = 1 + \frac{t^2}{3!} + \frac{t^4}{5!}+ \cdots &gt; 1$$</span> for all <span class="math-container">$t \in \mathbb{R}$</span>, <span class="math-container">$t\ne 0$</span>, while</p> <p><span class="math-container">$$1 + \frac{\sin s}{s} = \frac{1}{s} \cdot \int_0 ^s (1 + \cos u) du &gt;0$$</span> for all <span class="math-container">$s\in \mathbb{R}$</span>, <span class="math-container">$s\ne 0$</span>. 
We are done.</p> Answer 16: <p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class="math-container">$z,$</span> which changes the problem. This solution is not correct as written!</p> <hr /> <blockquote class="spoiler"> <p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class="math-container">$k\in\Bbb R$</span>, not just <span class="math-container">$k\in\Bbb R^+$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Say <span class="math-container">$z = a + bi$</span>. Then <span class="math-container">$$z\tan z = \frac{\tan a + i\tanh b}{1 - i\tan a\tanh b} = \left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\left(\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)\right).$$</span> <span class="math-container">$\left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\in\Bbb R^+$</span>, because <span class="math-container">$\tan$</span> and <span class="math-container">$\tanh$</span> are real valued when restricted to <span class="math-container">$\Bbb R$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Hence, it is enough to show that if <span class="math-container">$\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)$</span> is real, then <span class="math-container">$b = 0$</span>. For this quantity to be real, we need <span class="math-container">$\tanh b(1 + \tan^2 a) = 0$</span>. This forces either <span class="math-container">$\tanh b = 0$</span> or <span class="math-container">$1 + \tan^2 a = 0$</span>. However, <span class="math-container">$\tan^2 a\geq 0$</span>, so <span class="math-container">$1 + \tan^2 a\geq 1$</span>. Thus, if <span class="math-container">$z\tan z$</span> is real, we must have <span class="math-container">$\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class="math-container">$b = 0$</span>, which means that <span class="math-container">$a + bi\in\Bbb R$</span>.</p> </blockquote> Answer 17: <p>(See <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi">Real and imaginary part of $\tan(a+bi)$</a> )</p> <p><span class="math-container">$$\tan(x+iy)=\frac{\sin2x+i\sinh2y}{\cos2x+\cosh2y}$$</span> <span class="math-container">$$(x+iy)\tan(x+iy)=\frac{(x\sin2x-y\sinh2y)+i(x\sinh2y+y\sin2x)}{\cos2x+\cosh2y}$$</span></p> <p>This shows that <span class="math-container">$(x+iy)\tan(x+iy)$</span> is real exactly when</p> <p><span class="math-container">$$x\sinh2y=-y\sin2x.$$</span></p> <p>We want to know whether there are solutions with <span class="math-container">$y\neq0$</span>. If <span class="math-container">$x=0$</span>, then the real part of the function has the same sign as <span class="math-container">$-y\sinh2y$</span>, which has the same sign as <span class="math-container">$-y^2$</span>, which is not positive. So assume <span class="math-container">$xy\neq0$</span>. Then we have</p> <p><span class="math-container">$$\frac{\sinh2y}{2y}=-\frac{\sin2x}{2x}.$$</span></p> <p>But the left side is always greater than <span class="math-container">$1$</span>, while the right side is always smaller than <span class="math-container">$1$</span>. 
So there is no solution here.</p> <p>Thus <span class="math-container">$z\tan z$</span> is a positive real number only when <span class="math-container">$z$</span> is real.</p> Answer 18: <p>The solution uses the partial fraction <a href="https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion" rel="nofollow noreferrer">expansion</a> <span class="math-container">$$\pi z\cdot \tan \pi z = \sum_{k \ \textrm{odd integer}} \frac{z^2}{(\frac{k}{2})^2 - z^2}$$</span></p> <p>Consider the rational fraction <span class="math-container">$\phi_a(\cdot)$</span> with<br /> <span class="math-container">$$\phi_a(z) = \frac{z}{a-z} = \frac{a}{a-z}-1$$</span></p> <p>If <span class="math-container">$a&gt;0$</span> then <span class="math-container">$\phi_a$</span> invariates the upper (lower) half complex plane, that is <span class="math-container">$\phi_a(\mathcal{H}_{\pm}) \subset \mathcal{H}_{\pm}$</span> ( in fact here we have equality). Now the map <span class="math-container">$z\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class="math-container">$\mathcal{H}_+$</span>, and the second one to the lower half plane <span class="math-container">$\mathcal{H}_-$</span>. Therefore, for <span class="math-container">$a&gt;0$</span> the function <span class="math-container">$$z\ \mapsto \frac{z^2}{a-z^2}$$</span> maps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p> <p><span class="math-container">$$f\colon z\mapsto \sum_{a&gt;0} c_a \cdot \frac{z^2}{a- z^2}$$</span></p> <p>takes the first quadrant (Re, Im <span class="math-container">$&gt;0$</span>) to the upper half plane ( Im <span class="math-container">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class="math-container">$f^{-1}(\mathbb{R}) = \mathbb{R}$</span>.</p> <p><span class="math-container">$\bf{Added:}$</span> Another method</p> <p>We <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323">have</a></p> <p><span class="math-container">$$\tan(x+i y) = \frac{\sin 2x + i \sinh 2y}{\cos 2x + \cosh 2 y}$$</span></p> <p>Let us show that <span class="math-container">$f(z) \colon = z\ \tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class="math-container">$f(\bar z) = \bar f(z)$</span> (<span class="math-container">$f$</span> real) we see to what half plane will each quadrant go.</p> <p>For <span class="math-container">$y\ne 0$</span> we have <span class="math-container">$\cosh 2y &gt;1$</span> so the denominator <span class="math-container">$\cos 2x + \cosh y&gt;0$</span>. Let us that for <span class="math-container">$x\cdot y&gt;0$</span> the imaginary part of <span class="math-container">$(x+i y) ( \sin 2 x + i \sinh 2y)$</span> is <span class="math-container">$&gt;0$</span>, that is <span class="math-container">$x \sinh 2y + y \sin 2x&gt;0$</span>. With <span class="math-container">$2x = s$</span>, <span class="math-container">$2y = t$</span>, <span class="math-container">$\ s \cdot t&gt;0$</span> we have to show that <span class="math-container">$s \sinh t + t \sin s &gt; 0$</span>, or <span class="math-container">$$\frac{\sinh t}{t} + \frac{\sin s}{s} &gt;0$$</span></p> <p>Now <span class="math-container">$$\frac{\sinh t}{t} = 1 + \frac{t^2}{3!} + \frac{t^4}{5!}+ \cdots &gt; 1$$</span> for all <span class="math-container">$t \in \mathbb{R}$</span>, <span class="math-container">$t\ne 0$</span>, while</p> <p><span class="math-container">$$1 + \frac{\sin s}{s} = \frac{1}{s} \cdot \int_0 ^s (1 + \cos u) du &gt;0$$</span> for all <span class="math-container">$s\in \mathbb{R}$</span>, <span class="math-container">$s\ne 0$</span>. 
We are done.</p> Answer 19: <p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class="math-container">$z,$</span> which changes the problem. This solution is not correct as written!</p> <hr /> <blockquote class="spoiler"> <p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class="math-container">$k\in\Bbb R$</span>, not just <span class="math-container">$k\in\Bbb R^+$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Say <span class="math-container">$z = a + bi$</span>. Then <span class="math-container">$$z\tan z = \frac{\tan a + i\tanh b}{1 - i\tan a\tanh b} = \left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\left(\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)\right).$$</span> <span class="math-container">$\left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\in\Bbb R^+$</span>, because <span class="math-container">$\tan$</span> and <span class="math-container">$\tanh$</span> are real valued when restricted to <span class="math-container">$\Bbb R$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Hence, it is enough to show that if <span class="math-container">$\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)$</span> is real, then <span class="math-container">$b = 0$</span>. For this quantity to be real, we need <span class="math-container">$\tanh b(1 + \tan^2 a) = 0$</span>. This forces either <span class="math-container">$\tanh b = 0$</span> or <span class="math-container">$1 + \tan^2 a = 0$</span>. However, <span class="math-container">$\tan^2 a\geq 0$</span>, so <span class="math-container">$1 + \tan^2 a\geq 1$</span>. Thus, if <span class="math-container">$z\tan z$</span> is real, we must have <span class="math-container">$\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class="math-container">$b = 0$</span>, which means that <span class="math-container">$a + bi\in\Bbb R$</span>.</p> </blockquote> Answer 20: <p>(See <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi">Real and imaginary part of $\tan(a+bi)$</a> )</p> <p><span class="math-container">$$\tan(x+iy)=\frac{\sin2x+i\sinh2y}{\cos2x+\cosh2y}$$</span> <span class="math-container">$$(x+iy)\tan(x+iy)=\frac{(x\sin2x-y\sinh2y)+i(x\sinh2y+y\sin2x)}{\cos2x+\cosh2y}$$</span></p> <p>This shows that <span class="math-container">$(x+iy)\tan(x+iy)$</span> is real exactly when</p> <p><span class="math-container">$$x\sinh2y=-y\sin2x.$$</span></p> <p>We want to know whether there are solutions with <span class="math-container">$y\neq0$</span>. If <span class="math-container">$x=0$</span>, then the real part of the function has the same sign as <span class="math-container">$-y\sinh2y$</span>, which has the same sign as <span class="math-container">$-y^2$</span>, which is not positive. So assume <span class="math-container">$xy\neq0$</span>. Then we have</p> <p><span class="math-container">$$\frac{\sinh2y}{2y}=-\frac{\sin2x}{2x}.$$</span></p> <p>But the left side is always greater than <span class="math-container">$1$</span>, while the right side is always smaller than <span class="math-container">$1$</span>. 
So there is no solution here.</p> <p>Thus <span class="math-container">$z\tan z$</span> is a positive real number only when <span class="math-container">$z$</span> is real.</p> Answer 21: <p>The solution uses the partial fraction <a href="https://en.wikipedia.org/wiki/Trigonometric_functions#Partial_fraction_expansion" rel="nofollow noreferrer">expansion</a> <span class="math-container">$$\pi z\cdot \tan \pi z = \sum_{k \ \textrm{odd integer}} \frac{z^2}{(\frac{k}{2})^2 - z^2}$$</span></p> <p>Consider the rational fraction <span class="math-container">$\phi_a(\cdot)$</span> with<br /> <span class="math-container">$$\phi_a(z) = \frac{z}{a-z} = \frac{a}{a-z}-1$$</span></p> <p>If <span class="math-container">$a&gt;0$</span> then <span class="math-container">$\phi_a$</span> invariates the upper (lower) half complex plane, that is <span class="math-container">$\phi_a(\mathcal{H}_{\pm}) \subset \mathcal{H}_{\pm}$</span> ( in fact here we have equality). Now the map <span class="math-container">$z\mapsto z^2$</span> takes the first open quadrant to the upper half plane <span class="math-container">$\mathcal{H}_+$</span>, and the second one to the lower half plane <span class="math-container">$\mathcal{H}_-$</span>. Therefore, for <span class="math-container">$a&gt;0$</span> the function <span class="math-container">$$z\ \mapsto \frac{z^2}{a-z^2}$$</span> maps the first quadrant to the upper half plane, and the second quadrant to the lower half plane. We conclude that any function of the form</p> <p><span class="math-container">$$f\colon z\mapsto \sum_{a&gt;0} c_a \cdot \frac{z^2}{a- z^2}$$</span></p> <p>takes the first quadrant (Re, Im <span class="math-container">$&gt;0$</span>) to the upper half plane ( Im <span class="math-container">$&gt;0$</span>) and the second quadrant to the lower half plane. 
We conclude that <span class="math-container">$f^{-1}(\mathbb{R}) = \mathbb{R}$</span>.</p> <p><span class="math-container">$\bf{Added:}$</span> Another method</p> <p>We <a href="https://math.stackexchange.com/questions/2983928/real-and-imaginary-part-of-tanabi/4482323#4482323">have</a></p> <p><span class="math-container">$$\tan(x+i y) = \frac{\sin 2x + i \sinh 2y}{\cos 2x + \cosh 2 y}$$</span></p> <p>Let us show that <span class="math-container">$f(z) \colon = z\ \tan z$</span> takes the first quadrant and third quadrant to the upper half plane. Then using <span class="math-container">$f(\bar z) = \bar f(z)$</span> (<span class="math-container">$f$</span> real) we see to which half plane each quadrant goes.</p> <p>For <span class="math-container">$y\ne 0$</span> we have <span class="math-container">$\cosh 2y &gt;1$</span> so the denominator <span class="math-container">$\cos 2x + \cosh 2y&gt;0$</span>. Let us show that for <span class="math-container">$x\cdot y&gt;0$</span> the imaginary part of <span class="math-container">$(x+i y) ( \sin 2 x + i \sinh 2y)$</span> is <span class="math-container">$&gt;0$</span>, that is <span class="math-container">$x \sinh 2y + y \sin 2x&gt;0$</span>. With <span class="math-container">$2x = s$</span>, <span class="math-container">$2y = t$</span>, <span class="math-container">$\ s \cdot t&gt;0$</span> we have to show that <span class="math-container">$s \sinh t + t \sin s &gt; 0$</span>, or <span class="math-container">$$\frac{\sinh t}{t} + \frac{\sin s}{s} &gt;0$$</span></p> <p>Now <span class="math-container">$$\frac{\sinh t}{t} = 1 + \frac{t^2}{3!} + \frac{t^4}{5!}+ \cdots &gt; 1$$</span> for all <span class="math-container">$t \in \mathbb{R}$</span>, <span class="math-container">$t\ne 0$</span>, while</p> <p><span class="math-container">$$1 + \frac{\sin s}{s} = \frac{1}{s} \cdot \int_0 ^s (1 + \cos u) du &gt;0$$</span> for all <span class="math-container">$s\in \mathbb{R}$</span>, <span class="math-container">$s\ne 0$</span>. 
We are done.</p> Answer 22: <p><strong>Edit:</strong> As Valent points out in the comment below, it appears the premise is false: the main calculation here omits a factor of <span class="math-container">$z,$</span> which changes the problem. This solution is not correct as written!</p> <hr /> <blockquote class="spoiler"> <p>This is really only a slight tweaking of your argument to remove the casework and fix a few holes. It also shows the result for <span class="math-container">$k\in\Bbb R$</span>, not just <span class="math-container">$k\in\Bbb R^+$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Say <span class="math-container">$z = a + bi$</span>. Then <span class="math-container">$$z\tan z = \frac{\tan a + i\tanh b}{1 - i\tan a\tanh b} = \left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\left(\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)\right).$$</span> <span class="math-container">$\left(\frac{1}{1 + \tan^2 a\tanh^2 b}\right)\in\Bbb R^+$</span>, because <span class="math-container">$\tan$</span> and <span class="math-container">$\tanh$</span> are real valued when restricted to <span class="math-container">$\Bbb R$</span>.</p> </blockquote> <blockquote class="spoiler"> <p>Hence, it is enough to show that if <span class="math-container">$\tan a(1 - \tanh^2 b) + i\tanh b(1 + \tan^2 a)$</span> is real, then <span class="math-container">$b = 0$</span>. For this quantity to be real, we need <span class="math-container">$\tanh b(1 + \tan^2 a) = 0$</span>. This forces either <span class="math-container">$\tanh b = 0$</span> or <span class="math-container">$1 + \tan^2 a = 0$</span>. However, <span class="math-container">$\tan^2 a\geq 0$</span>, so <span class="math-container">$1 + \tan^2 a\geq 1$</span>. Thus, if <span class="math-container">$z\tan z$</span> is real, we must have <span class="math-container">$\tanh b = 0$</span>. 
As you've said, this happens if and only if <span class="math-container">$b = 0$</span>, which means that <span class="math-container">$a + bi\in\Bbb R$</span>.</p> </blockquote>
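Both methods ultimately rest on the same pair of bounds: sinh(t)/t exceeds 1 for every nonzero real t, while |sin(s)/s| stays below 1 for every nonzero real s. As a quick numerical sanity check of those bounds, here is a small Python sketch (an illustrative addition, not taken from the answers):

```python
import math

def sinh_ratio(t: float) -> float:
    # sinh(t)/t is strictly greater than 1 for nonzero real t,
    # since sinh(t)/t = 1 + t^2/3! + t^4/5! + ...
    return math.sinh(t) / t

def sin_ratio(s: float) -> float:
    # sin(s)/s is strictly less than 1 in absolute value for nonzero real s
    return math.sin(s) / s

# A solution of x*sinh(2y) = -y*sin(2x) with x*y != 0 would force
# sinh(2y)/(2y) = -sin(2x)/(2x), which these bounds rule out.
for v in (0.1, 0.5, 1.0, 2.0, 5.0, 10.0):
    assert sinh_ratio(v) > 1 and sinh_ratio(-v) > 1
    assert abs(sin_ratio(v)) < 1
```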
Proof of Russian Peasant Multiplication
2,099,095
6
https://math.stackexchange.com/questions/2099095/proof-of-russian-peasant-multiplication
<p>I'm studying a combinatorics text (<a href="http://rads.stackoverflow.com/amzn/click/0521457610" rel="noreferrer">Cameron</a>) in preparation for taking discrete math this upcoming semester. I've taken a fair amount of math in college, through Calculus II, but this is my first 500+ level math course. </p> <p>In chapter 2 of the <a href="http://rads.stackoverflow.com/amzn/click/0521457610" rel="noreferrer">Cameron</a> book, the author introduces an algorithm for multiplication called Russian Peasant Multiplication. I want to prove that this algorithm will always result in the product of two numbers.</p> <p>I've been chewing on the proof for a while now, and just can't seem to get any traction. Thanks in advance for any help.</p> <h1>Description of Algorithm</h1> <p>I'll try to explain the algorithm clearly. The goal of the algorithm is to find the product of two numbers $m$ and $n$. The below table is filled out for $m$ and $n$:</p> <p>$$\begin{array}{c} m&amp;&amp;n&amp;\\ \hline \bigl\lfloor \frac{m}{2} \bigr\rfloor&amp;&amp; 2n \\ \bigl\lfloor \frac{m}{4} \bigr\rfloor&amp;&amp; 4n \\ \bigl\lfloor \frac{m}{8} \bigr\rfloor&amp;&amp; 8n \\ \bigl\lfloor \frac{m}{16} \bigr\rfloor&amp;&amp; 16n \\ \bigl\lfloor \frac{m}{32} \bigr\rfloor&amp;&amp; 32n \\ \end{array}$$</p> <p>The divisors in the left column and the multipliers in the right column follow the doubling sequence $a_n = 2a_{n-1}\text{ where }a_1=2$ (i.e., successive powers of two). The columns are extended down until the value in the left column is equal to the number one. 
After filling out the table, the product is found by adding the values of the right column where the left column is an odd number.</p> <p>For example, to multiply 18 and 37 with the algorithm:</p> <p>$$\begin{array}{c} m&amp;&amp;n&amp;\\ \hline 18 &amp;&amp; 37 \\ 9 &amp;&amp; 74 \\ 4 &amp;&amp; 148 \\ 2 &amp;&amp; 296 \\ 1 &amp;&amp; 592 \\ \end{array}$$</p> <p>To find the product, cross out the rows where the value of the left column is even:</p> <p>$$\begin{array}{c} m&amp;&amp;n&amp;\\ \hline \require{enclose} \enclose{horizontalstrike}{18} &amp;&amp; \enclose{horizontalstrike}{37} \\ 9 &amp;&amp; 74 \\ \enclose{horizontalstrike}{4} &amp;&amp; \enclose{horizontalstrike}{148} \\ \enclose{horizontalstrike}{2} &amp;&amp; \enclose{horizontalstrike}{296} \\ 1 &amp;&amp; 592 \\ \end{array}$$</p> <p>And add the remaining numbers in the right column to get the result: $18\times37=74+592=666$</p> <h1>Alternate Description of Algorithm</h1> <p>As an aside, here is a shorter way to describe the algorithm: repeat the below steps until the left column == 1:</p> <pre><code>• Halve the last number in the first column • Write the result below (discard the remainder) • Double the last number in the second column. • Write the result below. </code></pre> <p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column.</p> <h1>What I've done so Far</h1> <p>I haven't been able to get much of anywhere on the proof. 
One thing I did was create a slightly different way of arriving at the same outcome:</p> <p>$$\begin{array}{c} m&amp;&amp;n&amp;&amp;\text{index}&amp;&amp;\text{remainder}&amp;&amp;\text{if remainder == 1, }n\times2^{\text{index}}&amp;\\ \hline 18 &amp;&amp; 37 &amp;&amp; 0 &amp;&amp; 0 \\ 9 &amp;&amp; 74 &amp;&amp; 1 &amp;&amp; 1 &amp;&amp; 74\\ 4 &amp;&amp; 148 &amp;&amp; 2 &amp;&amp; 0 \\ 2 &amp;&amp; 296 &amp;&amp; 3 &amp;&amp; 0 \\ 1 &amp;&amp; 592 &amp;&amp; 4 &amp;&amp; 1 &amp;&amp; 592\\ \end{array}$$</p> <p>This brings to light the relationship between the algorithm and binary numbers: the fourth (remainder) column, read from bottom to top, is $m=18$ written in binary, $10010_2$. </p> <p>I also wrote some Python functions to help me explore the problem, <a href="https://github.com/def-mycroft/russian_peasant" rel="noreferrer">Github repo</a>.</p>
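To make the halving/doubling table concrete while working on the proof, here is a minimal Python sketch of the procedure described in the question (the function name `russian_peasant` is my own choice, not taken from the Cameron text or the linked repo):

```python
def russian_peasant(m, n):
    """Multiply two non-negative integers m and n by repeated halving/doubling.

    Mirrors the table in the question: halve the left column (discarding
    remainders), double the right column, and sum the right-column entries
    that sit next to an odd left-column value.
    """
    total = 0
    while m >= 1:
        if m % 2 == 1:   # odd left entry: this row survives the cross-out
            total += n
        m //= 2          # halve, discarding the remainder
        n *= 2           # double
    return total
```

For example, `russian_peasant(18, 37)` accumulates 74 (row 9) and 592 (row 1), giving 666.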
[ { "answer_id": 2099114, "score": 5, "text": "<p>Write $m$ in binary. The lines where the number is odd correspond to the bits that are $1$ in the binary expansion. In your example of $18 \\times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \\times 37=(2+16) \\times 37$, which are exactly the terms you added.</p>\n" }, { "answer_id": 2100406, "score": 2, "text": "<p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p>\n\n<p>A binary number is a way of expressing a number in powers of two, taking the form: </p>\n\n<p>$$m = d_k2^{k-1}+d_{k-1}2^{k-2}+d_{k-2}2^{k-3}+...+d_{1}2^{0}$$</p>\n\n<p>Where $k$ is the number of digits in the binary representation of $m$. Therefore:</p>\n\n<p>$$m\\times n = \\left[d_k2^{k-1}+d_{k-1}2^{k-2}+d_{k-2}2^{k-3}+...+d_{1}2^{0}\\right]\\times n$$</p>\n\n<p>For example:</p>\n\n<p>$$18\\times 37 = \\left[2^4+2\\right]\\times 37 = \\left[16+2\\right]\\times 37 = 592+74 = 666$$</p>\n\n<p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p>\n\n<p>$$\\begin{array}{c}\n\\text{Index}&amp;&amp;m&amp;&amp;\\text{If col. 2 is odd, }n\\times 2^{\\text{Index}}&amp;\\\\ \\hline\n0 &amp;&amp; m\\\\\n1 &amp;&amp; \\bigl\\lfloor \\frac{m}{2} \\bigr\\rfloor\\\\\n2 &amp;&amp; \\bigl\\lfloor \\frac{m}{4} \\bigr\\rfloor\\\\\n3 &amp;&amp; \\bigl\\lfloor \\frac{m}{8} \\bigr\\rfloor\\\\\n4 &amp;&amp; \\bigl\\lfloor \\frac{m}{16} \\bigr\\rfloor\\\\\n5 &amp;&amp; \\bigl\\lfloor \\frac{m}{32} \\bigr\\rfloor\\\\\n\\end{array}$$</p>\n\n<p>Where the first column is the sequence of integers, and the denominators in the second column are the successive powers of two $2, 4, 8, \\dots$. 
The summation of the third column is the product $m\\times n$.</p>\n\n<p>This is equivalent to the following steps, repeated until the left column reaches 1:</p>\n\n<pre><code>• Halve the last number in the first column\n• Write the result below (discard the remainder)\n• Double the last number in the second column.\n• Write the result below.\n</code></pre>\n\n<p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p>\n" }, { "answer_id": 4427236, "score": 1, "text": "<p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p>\n<p><span class=\"math-container\">$$18\\times 37=9\\times 74$$</span>\nNothing surprising here, we just multiplied by <span class=\"math-container\">$1$</span> in the form <span class=\"math-container\">$2\\times 1/2$</span>. Since <span class=\"math-container\">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again:\n<span class=\"math-container\">$$9\\times 74=(4+1/2)\\times 148=4\\times 148+1/2\\times 148$$</span>\nAnd what's the last term? It's <span class=\"math-container\">$74$</span>, right above the <span class=\"math-container\">$148$</span>, which is next to the <em>odd</em> number <span class=\"math-container\">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term:\n<span class=\"math-container\">$$4\\times 148=2\\times 296$$</span>\nThe <span class=\"math-container\">$4\\times 148$</span> row can be ignored. 
Continuing,\n<span class=\"math-container\">$$2\\times 296=1\\times 592$$</span>\nPutting everything together, we end up with <span class=\"math-container\">$18\\times 37=592+74$</span>.</p>\n" }, { "answer_id": 2099114, "score": 5, "text": "<p>Write $m$ in binary. The lines where the number is odd corresponds to the bits that are $1$ in the binary expansion. In your example of $18 \\times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \\times 37=(2+16) \\times 37$, which are exactly the terms you added.</p>\n" }, { "answer_id": 2100406, "score": 2, "text": "<p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p>\n\n<p>A binary number is a way of expressing a number in powers of two, taking the form: </p>\n\n<p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p>\n\n<p>Where $n$ is the number of digits in the binary number. Therefore:</p>\n\n<p>$$m\\times n = \\left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\\right]\\times n$$</p>\n\n<p>For example:</p>\n\n<p>$$=18\\times 37$$\n$$\\quad\\;\\;\\;\\;= \\left[2^4+2\\right]\\times 37$$\n$$\\quad\\;\\;\\;= \\left[16+2\\right]\\times 37$$\n$$\\;= 592+74$$\n$$666$$</p>\n\n<p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p>\n\n<p>$$\\begin{array}{c}\n\\text{Index}&amp;&amp;m&amp;&amp;\\text{If col. 1 is odd, }n\\times 2^{Index}&amp;\\\\ \\hline\n0 &amp;&amp; m\\\\\n1 &amp;&amp; \\bigl\\lfloor \\frac{m}{2} \\bigr\\rfloor\\\\\n2 &amp;&amp; \\bigl\\lfloor \\frac{m}{4} \\bigr\\rfloor\\\\\n3 &amp;&amp; \\bigl\\lfloor \\frac{m}{8} \\bigr\\rfloor\\\\\n4 &amp;&amp; \\bigl\\lfloor \\frac{m}{16} \\bigr\\rfloor\\\\\n5 &amp;&amp; \\bigl\\lfloor \\frac{m}{32} \\bigr\\rfloor\\\\\n\\end{array}$$</p>\n\n<p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\\text{ where }a_1=2$. 
The summation of the third column is the product $m\\times n$.</p>\n\n<p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p>\n\n<pre><code>• Halve the last number in the first column\n• Write the result below (discard the remainder)\n• Double the last number in the first column.\n• Write the result below.\n</code></pre>\n\n<p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p>\n" }, { "answer_id": 4427236, "score": 1, "text": "<p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p>\n<p><span class=\"math-container\">$$18\\times 37=9\\times 74$$</span>\nNothing surprising here, we just multiplied by <span class=\"math-container\">$1$</span> in the form <span class=\"math-container\">$2\\times 1/2$</span>. Since <span class=\"math-container\">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again:\n<span class=\"math-container\">$$9\\times 74=(4+1/2)\\times 148=4\\times 148+1/2\\times 148$$</span>\nAnd what's the last term? It's <span class=\"math-container\">$74$</span>, right above the <span class=\"math-container\">$148$</span>, which is next to the <em>odd</em> number <span class=\"math-container\">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term:\n<span class=\"math-container\">$$4\\times 148=2\\times 296$$</span>\nThe <span class=\"math-container\">$4\\times 148$</span> row can be ignored. 
Continuing,\n<span class=\"math-container\">$$2\\times 296=1\\times 592$$</span>\nPutting everything together, we end up with <span class=\"math-container\">$18\\times 37=592+74$</span>.</p>\n" }, { "answer_id": 2099114, "score": 5, "text": "<p>Write $m$ in binary. The lines where the number is odd corresponds to the bits that are $1$ in the binary expansion. In your example of $18 \\times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \\times 37=(2+16) \\times 37$, which are exactly the terms you added.</p>\n" }, { "answer_id": 2100406, "score": 2, "text": "<p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p>\n\n<p>A binary number is a way of expressing a number in powers of two, taking the form: </p>\n\n<p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p>\n\n<p>Where $n$ is the number of digits in the binary number. Therefore:</p>\n\n<p>$$m\\times n = \\left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\\right]\\times n$$</p>\n\n<p>For example:</p>\n\n<p>$$=18\\times 37$$\n$$\\quad\\;\\;\\;\\;= \\left[2^4+2\\right]\\times 37$$\n$$\\quad\\;\\;\\;= \\left[16+2\\right]\\times 37$$\n$$\\;= 592+74$$\n$$666$$</p>\n\n<p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p>\n\n<p>$$\\begin{array}{c}\n\\text{Index}&amp;&amp;m&amp;&amp;\\text{If col. 1 is odd, }n\\times 2^{Index}&amp;\\\\ \\hline\n0 &amp;&amp; m\\\\\n1 &amp;&amp; \\bigl\\lfloor \\frac{m}{2} \\bigr\\rfloor\\\\\n2 &amp;&amp; \\bigl\\lfloor \\frac{m}{4} \\bigr\\rfloor\\\\\n3 &amp;&amp; \\bigl\\lfloor \\frac{m}{8} \\bigr\\rfloor\\\\\n4 &amp;&amp; \\bigl\\lfloor \\frac{m}{16} \\bigr\\rfloor\\\\\n5 &amp;&amp; \\bigl\\lfloor \\frac{m}{32} \\bigr\\rfloor\\\\\n\\end{array}$$</p>\n\n<p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\\text{ where }a_1=2$. 
The summation of the third column is the product $m\\times n$.</p>\n\n<p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p>\n\n<pre><code>• Halve the last number in the first column\n• Write the result below (discard the remainder)\n• Double the last number in the first column.\n• Write the result below.\n</code></pre>\n\n<p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p>\n" }, { "answer_id": 4427236, "score": 1, "text": "<p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p>\n<p><span class=\"math-container\">$$18\\times 37=9\\times 74$$</span>\nNothing surprising here, we just multiplied by <span class=\"math-container\">$1$</span> in the form <span class=\"math-container\">$2\\times 1/2$</span>. Since <span class=\"math-container\">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again:\n<span class=\"math-container\">$$9\\times 74=(4+1/2)\\times 148=4\\times 148+1/2\\times 148$$</span>\nAnd what's the last term? It's <span class=\"math-container\">$74$</span>, right above the <span class=\"math-container\">$148$</span>, which is next to the <em>odd</em> number <span class=\"math-container\">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term:\n<span class=\"math-container\">$$4\\times 148=2\\times 296$$</span>\nThe <span class=\"math-container\">$4\\times 148$</span> row can be ignored. 
Continuing,\n<span class=\"math-container\">$$2\\times 296=1\\times 592$$</span>\nPutting everything together, we end up with <span class=\"math-container\">$18\\times 37=592+74$</span>.</p>\n" }, { "answer_id": 2099114, "score": 5, "text": "<p>Write $m$ in binary. The lines where the number is odd corresponds to the bits that are $1$ in the binary expansion. In your example of $18 \\times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \\times 37=(2+16) \\times 37$, which are exactly the terms you added.</p>\n" }, { "answer_id": 2100406, "score": 2, "text": "<p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p>\n\n<p>A binary number is a way of expressing a number in powers of two, taking the form: </p>\n\n<p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p>\n\n<p>Where $n$ is the number of digits in the binary number. Therefore:</p>\n\n<p>$$m\\times n = \\left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\\right]\\times n$$</p>\n\n<p>For example:</p>\n\n<p>$$=18\\times 37$$\n$$\\quad\\;\\;\\;\\;= \\left[2^4+2\\right]\\times 37$$\n$$\\quad\\;\\;\\;= \\left[16+2\\right]\\times 37$$\n$$\\;= 592+74$$\n$$666$$</p>\n\n<p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p>\n\n<p>$$\\begin{array}{c}\n\\text{Index}&amp;&amp;m&amp;&amp;\\text{If col. 1 is odd, }n\\times 2^{Index}&amp;\\\\ \\hline\n0 &amp;&amp; m\\\\\n1 &amp;&amp; \\bigl\\lfloor \\frac{m}{2} \\bigr\\rfloor\\\\\n2 &amp;&amp; \\bigl\\lfloor \\frac{m}{4} \\bigr\\rfloor\\\\\n3 &amp;&amp; \\bigl\\lfloor \\frac{m}{8} \\bigr\\rfloor\\\\\n4 &amp;&amp; \\bigl\\lfloor \\frac{m}{16} \\bigr\\rfloor\\\\\n5 &amp;&amp; \\bigl\\lfloor \\frac{m}{32} \\bigr\\rfloor\\\\\n\\end{array}$$</p>\n\n<p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\\text{ where }a_1=2$. 
The summation of the third column is the product $m\\times n$.</p>\n\n<p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p>\n\n<pre><code>• Halve the last number in the first column\n• Write the result below (discard the remainder)\n• Double the last number in the first column.\n• Write the result below.\n</code></pre>\n\n<p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p>\n" }, { "answer_id": 4427236, "score": 1, "text": "<p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p>\n<p><span class=\"math-container\">$$18\\times 37=9\\times 74$$</span>\nNothing surprising here, we just multiplied by <span class=\"math-container\">$1$</span> in the form <span class=\"math-container\">$2\\times 1/2$</span>. Since <span class=\"math-container\">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again:\n<span class=\"math-container\">$$9\\times 74=(4+1/2)\\times 148=4\\times 148+1/2\\times 148$$</span>\nAnd what's the last term? It's <span class=\"math-container\">$74$</span>, right above the <span class=\"math-container\">$148$</span>, which is next to the <em>odd</em> number <span class=\"math-container\">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term:\n<span class=\"math-container\">$$4\\times 148=2\\times 296$$</span>\nThe <span class=\"math-container\">$4\\times 148$</span> row can be ignored. 
Continuing,\n<span class=\"math-container\">$$2\\times 296=1\\times 592$$</span>\nPutting everything together, we end up with <span class=\"math-container\">$18\\times 37=592+74$</span>.</p>\n" }, { "answer_id": 2099114, "score": 5, "text": "<p>Write $m$ in binary. The lines where the number is odd corresponds to the bits that are $1$ in the binary expansion. In your example of $18 \\times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \\times 37=(2+16) \\times 37$, which are exactly the terms you added.</p>\n" }, { "answer_id": 2100406, "score": 2, "text": "<p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p>\n\n<p>A binary number is a way of expressing a number in powers of two, taking the form: </p>\n\n<p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p>\n\n<p>Where $n$ is the number of digits in the binary number. Therefore:</p>\n\n<p>$$m\\times n = \\left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\\right]\\times n$$</p>\n\n<p>For example:</p>\n\n<p>$$=18\\times 37$$\n$$\\quad\\;\\;\\;\\;= \\left[2^4+2\\right]\\times 37$$\n$$\\quad\\;\\;\\;= \\left[16+2\\right]\\times 37$$\n$$\\;= 592+74$$\n$$666$$</p>\n\n<p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p>\n\n<p>$$\\begin{array}{c}\n\\text{Index}&amp;&amp;m&amp;&amp;\\text{If col. 1 is odd, }n\\times 2^{Index}&amp;\\\\ \\hline\n0 &amp;&amp; m\\\\\n1 &amp;&amp; \\bigl\\lfloor \\frac{m}{2} \\bigr\\rfloor\\\\\n2 &amp;&amp; \\bigl\\lfloor \\frac{m}{4} \\bigr\\rfloor\\\\\n3 &amp;&amp; \\bigl\\lfloor \\frac{m}{8} \\bigr\\rfloor\\\\\n4 &amp;&amp; \\bigl\\lfloor \\frac{m}{16} \\bigr\\rfloor\\\\\n5 &amp;&amp; \\bigl\\lfloor \\frac{m}{32} \\bigr\\rfloor\\\\\n\\end{array}$$</p>\n\n<p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\\text{ where }a_1=2$. 
The summation of the third column is the product $m\\times n$.</p>\n\n<p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p>\n\n<pre><code>• Halve the last number in the first column\n• Write the result below (discard the remainder)\n• Double the last number in the first column.\n• Write the result below.\n</code></pre>\n\n<p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p>\n" }, { "answer_id": 4427236, "score": 1, "text": "<p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p>\n<p><span class=\"math-container\">$$18\\times 37=9\\times 74$$</span>\nNothing surprising here, we just multiplied by <span class=\"math-container\">$1$</span> in the form <span class=\"math-container\">$2\\times 1/2$</span>. Since <span class=\"math-container\">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again:\n<span class=\"math-container\">$$9\\times 74=(4+1/2)\\times 148=4\\times 148+1/2\\times 148$$</span>\nAnd what's the last term? It's <span class=\"math-container\">$74$</span>, right above the <span class=\"math-container\">$148$</span>, which is next to the <em>odd</em> number <span class=\"math-container\">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term:\n<span class=\"math-container\">$$4\\times 148=2\\times 296$$</span>\nThe <span class=\"math-container\">$4\\times 148$</span> row can be ignored. 
Continuing,\n<span class=\"math-container\">$$2\\times 296=1\\times 592$$</span>\nPutting everything together, we end up with <span class=\"math-container\">$18\\times 37=592+74$</span>.</p>\n" }, { "answer_id": 2099114, "score": 5, "text": "<p>Write $m$ in binary. The lines where the number is odd corresponds to the bits that are $1$ in the binary expansion. In your example of $18 \\times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \\times 37=(2+16) \\times 37$, which are exactly the terms you added.</p>\n" }, { "answer_id": 2100406, "score": 2, "text": "<p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p>\n\n<p>A binary number is a way of expressing a number in powers of two, taking the form: </p>\n\n<p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p>\n\n<p>Where $n$ is the number of digits in the binary number. Therefore:</p>\n\n<p>$$m\\times n = \\left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\\right]\\times n$$</p>\n\n<p>For example:</p>\n\n<p>$$=18\\times 37$$\n$$\\quad\\;\\;\\;\\;= \\left[2^4+2\\right]\\times 37$$\n$$\\quad\\;\\;\\;= \\left[16+2\\right]\\times 37$$\n$$\\;= 592+74$$\n$$666$$</p>\n\n<p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p>\n\n<p>$$\\begin{array}{c}\n\\text{Index}&amp;&amp;m&amp;&amp;\\text{If col. 1 is odd, }n\\times 2^{Index}&amp;\\\\ \\hline\n0 &amp;&amp; m\\\\\n1 &amp;&amp; \\bigl\\lfloor \\frac{m}{2} \\bigr\\rfloor\\\\\n2 &amp;&amp; \\bigl\\lfloor \\frac{m}{4} \\bigr\\rfloor\\\\\n3 &amp;&amp; \\bigl\\lfloor \\frac{m}{8} \\bigr\\rfloor\\\\\n4 &amp;&amp; \\bigl\\lfloor \\frac{m}{16} \\bigr\\rfloor\\\\\n5 &amp;&amp; \\bigl\\lfloor \\frac{m}{32} \\bigr\\rfloor\\\\\n\\end{array}$$</p>\n\n<p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\\text{ where }a_1=2$. 
The summation of the third column is the product $m\\times n$.</p>\n\n<p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p>\n\n<pre><code>• Halve the last number in the first column\n• Write the result below (discard the remainder)\n• Double the last number in the first column.\n• Write the result below.\n</code></pre>\n\n<p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p>\n" }, { "answer_id": 4427236, "score": 1, "text": "<p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p>\n<p><span class=\"math-container\">$$18\\times 37=9\\times 74$$</span>\nNothing surprising here, we just multiplied by <span class=\"math-container\">$1$</span> in the form <span class=\"math-container\">$2\\times 1/2$</span>. Since <span class=\"math-container\">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again:\n<span class=\"math-container\">$$9\\times 74=(4+1/2)\\times 148=4\\times 148+1/2\\times 148$$</span>\nAnd what's the last term? It's <span class=\"math-container\">$74$</span>, right above the <span class=\"math-container\">$148$</span>, which is next to the <em>odd</em> number <span class=\"math-container\">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term:\n<span class=\"math-container\">$$4\\times 148=2\\times 296$$</span>\nThe <span class=\"math-container\">$4\\times 148$</span> row can be ignored. 
Continuing,\n<span class=\"math-container\">$$2\\times 296=1\\times 592$$</span>\nPutting everything together, we end up with <span class=\"math-container\">$18\\times 37=592+74$</span>.</p>\n" }, { "answer_id": 2099114, "score": 5, "text": "<p>Write $m$ in binary. The lines where the number is odd corresponds to the bits that are $1$ in the binary expansion. In your example of $18 \\times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \\times 37=(2+16) \\times 37$, which are exactly the terms you added.</p>\n" }, { "answer_id": 2100406, "score": 2, "text": "<p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p>\n\n<p>A binary number is a way of expressing a number in powers of two, taking the form: </p>\n\n<p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p>\n\n<p>Where $n$ is the number of digits in the binary number. Therefore:</p>\n\n<p>$$m\\times n = \\left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\\right]\\times n$$</p>\n\n<p>For example:</p>\n\n<p>$$=18\\times 37$$\n$$\\quad\\;\\;\\;\\;= \\left[2^4+2\\right]\\times 37$$\n$$\\quad\\;\\;\\;= \\left[16+2\\right]\\times 37$$\n$$\\;= 592+74$$\n$$666$$</p>\n\n<p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p>\n\n<p>$$\\begin{array}{c}\n\\text{Index}&amp;&amp;m&amp;&amp;\\text{If col. 1 is odd, }n\\times 2^{Index}&amp;\\\\ \\hline\n0 &amp;&amp; m\\\\\n1 &amp;&amp; \\bigl\\lfloor \\frac{m}{2} \\bigr\\rfloor\\\\\n2 &amp;&amp; \\bigl\\lfloor \\frac{m}{4} \\bigr\\rfloor\\\\\n3 &amp;&amp; \\bigl\\lfloor \\frac{m}{8} \\bigr\\rfloor\\\\\n4 &amp;&amp; \\bigl\\lfloor \\frac{m}{16} \\bigr\\rfloor\\\\\n5 &amp;&amp; \\bigl\\lfloor \\frac{m}{32} \\bigr\\rfloor\\\\\n\\end{array}$$</p>\n\n<p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\\text{ where }a_1=2$. 
The summation of the third column is the product $m\\times n$.</p>\n\n<p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p>\n\n<pre><code>• Halve the last number in the first column\n• Write the result below (discard the remainder)\n• Double the last number in the first column.\n• Write the result below.\n</code></pre>\n\n<p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p>\n" }, { "answer_id": 4427236, "score": 1, "text": "<p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p>\n<p><span class=\"math-container\">$$18\\times 37=9\\times 74$$</span>\nNothing surprising here, we just multiplied by <span class=\"math-container\">$1$</span> in the form <span class=\"math-container\">$2\\times 1/2$</span>. Since <span class=\"math-container\">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again:\n<span class=\"math-container\">$$9\\times 74=(4+1/2)\\times 148=4\\times 148+1/2\\times 148$$</span>\nAnd what's the last term? It's <span class=\"math-container\">$74$</span>, right above the <span class=\"math-container\">$148$</span>, which is next to the <em>odd</em> number <span class=\"math-container\">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term:\n<span class=\"math-container\">$$4\\times 148=2\\times 296$$</span>\nThe <span class=\"math-container\">$4\\times 148$</span> row can be ignored. 
Continuing,\n<span class=\"math-container\">$$2\\times 296=1\\times 592$$</span>\nPutting everything together, we end up with <span class=\"math-container\">$18\\times 37=592+74$</span>.</p>\n" }, { "answer_id": 2099114, "score": 5, "text": "<p>Write $m$ in binary. The lines where the number is odd corresponds to the bits that are $1$ in the binary expansion. In your example of $18 \\times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \\times 37=(2+16) \\times 37$, which are exactly the terms you added.</p>\n" }, { "answer_id": 2100406, "score": 2, "text": "<p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p>\n\n<p>A binary number is a way of expressing a number in powers of two, taking the form: </p>\n\n<p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p>\n\n<p>Where $n$ is the number of digits in the binary number. Therefore:</p>\n\n<p>$$m\\times n = \\left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\\right]\\times n$$</p>\n\n<p>For example:</p>\n\n<p>$$=18\\times 37$$\n$$\\quad\\;\\;\\;\\;= \\left[2^4+2\\right]\\times 37$$\n$$\\quad\\;\\;\\;= \\left[16+2\\right]\\times 37$$\n$$\\;= 592+74$$\n$$666$$</p>\n\n<p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p>\n\n<p>$$\\begin{array}{c}\n\\text{Index}&amp;&amp;m&amp;&amp;\\text{If col. 1 is odd, }n\\times 2^{Index}&amp;\\\\ \\hline\n0 &amp;&amp; m\\\\\n1 &amp;&amp; \\bigl\\lfloor \\frac{m}{2} \\bigr\\rfloor\\\\\n2 &amp;&amp; \\bigl\\lfloor \\frac{m}{4} \\bigr\\rfloor\\\\\n3 &amp;&amp; \\bigl\\lfloor \\frac{m}{8} \\bigr\\rfloor\\\\\n4 &amp;&amp; \\bigl\\lfloor \\frac{m}{16} \\bigr\\rfloor\\\\\n5 &amp;&amp; \\bigl\\lfloor \\frac{m}{32} \\bigr\\rfloor\\\\\n\\end{array}$$</p>\n\n<p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\\text{ where }a_1=2$. 
The summation of the third column is the product $m\\times n$.</p>\n\n<p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p>\n\n<pre><code>• Halve the last number in the first column\n• Write the result below (discard the remainder)\n• Double the last number in the first column.\n• Write the result below.\n</code></pre>\n\n<p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p>\n" }, { "answer_id": 4427236, "score": 1, "text": "<p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p>\n<p><span class=\"math-container\">$$18\\times 37=9\\times 74$$</span>\nNothing surprising here, we just multiplied by <span class=\"math-container\">$1$</span> in the form <span class=\"math-container\">$2\\times 1/2$</span>. Since <span class=\"math-container\">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again:\n<span class=\"math-container\">$$9\\times 74=(4+1/2)\\times 148=4\\times 148+1/2\\times 148$$</span>\nAnd what's the last term? It's <span class=\"math-container\">$74$</span>, right above the <span class=\"math-container\">$148$</span>, which is next to the <em>odd</em> number <span class=\"math-container\">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term:\n<span class=\"math-container\">$$4\\times 148=2\\times 296$$</span>\nThe <span class=\"math-container\">$4\\times 148$</span> row can be ignored. 
Continuing,\n<span class=\"math-container\">$$2\\times 296=1\\times 592$$</span>\nPutting everything together, we end up with <span class=\"math-container\">$18\\times 37=592+74$</span>.</p>\n" } ]
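As a quick sanity check, the halving/doubling procedure from the answers above can be sketched in a few lines of Python (a minimal illustration; the function name `peasant_multiply` is not from any of the answers):

```python
def peasant_multiply(m, n):
    """Russian peasant multiplication: repeatedly halve m (discarding
    remainders) and double n; sum the n values on rows where m is odd."""
    total = 0
    while m >= 1:
        if m % 2 == 1:        # odd row: keep the second-column value
            total += n
        m //= 2               # halve the first column
        n *= 2                # double the second column
    return total

print(peasant_multiply(18, 37))  # 666, matching 592 + 74 in the answers
```

The odd-row test is exactly the "delete the rows where the first column is even" step of the table method.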
Question: Proof of Russian Peasant Multiplication Answer 1: <p>Write $m$ in binary. The lines where the number is odd correspond to the bits that are $1$ in the binary expansion. In your example of $18 \times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \times 37=(2+16) \times 37$, which are exactly the terms you added.</p> Answer 2: <p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p> <p>A binary number is a way of expressing a number in powers of two, taking the form: </p> <p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p> <p>Where $n$ is the number of digits in the binary number. Therefore:</p> <p>$$m\times n = \left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\right]\times n$$</p> <p>For example:</p> <p>$$18\times 37$$ $$\quad\;\;\;\;= \left[2^4+2\right]\times 37$$ $$\quad\;\;\;= \left[16+2\right]\times 37$$ $$\;= 592+74$$ $$= 666$$</p> <p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p> <p>$$\begin{array}{c} \text{Index}&amp;&amp;m&amp;&amp;\text{If col. 1 is odd, }n\times 2^{Index}&amp;\\ \hline 0 &amp;&amp; m\\ 1 &amp;&amp; \bigl\lfloor \frac{m}{2} \bigr\rfloor\\ 2 &amp;&amp; \bigl\lfloor \frac{m}{4} \bigr\rfloor\\ 3 &amp;&amp; \bigl\lfloor \frac{m}{8} \bigr\rfloor\\ 4 &amp;&amp; \bigl\lfloor \frac{m}{16} \bigr\rfloor\\ 5 &amp;&amp; \bigl\lfloor \frac{m}{32} \bigr\rfloor\\ \end{array}$$</p> <p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\text{ where }a_1=2$. The summation of the third column is the product $m\times n$.</p> <p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p> <pre><code>• Halve the last number in the first column • Write the result below (discard the remainder) • Double the last number in the second column. • Write the result below. 
</code></pre> <p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p> Answer 3: <p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p> <p><span class="math-container">$$18\times 37=9\times 74$$</span> Nothing surprising here, we just multiplied by <span class="math-container">$1$</span> in the form <span class="math-container">$2\times 1/2$</span>. Since <span class="math-container">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again: <span class="math-container">$$9\times 74=(4+1/2)\times 148=4\times 148+1/2\times 148$$</span> And what's the last term? It's <span class="math-container">$74$</span>, right above the <span class="math-container">$148$</span>, which is next to the <em>odd</em> number <span class="math-container">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term: <span class="math-container">$$4\times 148=2\times 296$$</span> The <span class="math-container">$4\times 148$</span> row can be ignored. Continuing, <span class="math-container">$$2\times 296=1\times 592$$</span> Putting everything together, we end up with <span class="math-container">$18\times 37=592+74$</span>.</p> Answer 4: <p>Write $m$ in binary. The lines where the number is odd corresponds to the bits that are $1$ in the binary expansion. 
In your example of $18 \times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \times 37=(2+16) \times 37$, which are exactly the terms you added.</p> Answer 5: <p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p> <p>A binary number is a way of expressing a number in powers of two, taking the form: </p> <p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p> <p>Where $n$ is the number of digits in the binary number. Therefore:</p> <p>$$m\times n = \left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\right]\times n$$</p> <p>For example:</p> <p>$$=18\times 37$$ $$\quad\;\;\;\;= \left[2^4+2\right]\times 37$$ $$\quad\;\;\;= \left[16+2\right]\times 37$$ $$\;= 592+74$$ $$666$$</p> <p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p> <p>$$\begin{array}{c} \text{Index}&amp;&amp;m&amp;&amp;\text{If col. 1 is odd, }n\times 2^{Index}&amp;\\ \hline 0 &amp;&amp; m\\ 1 &amp;&amp; \bigl\lfloor \frac{m}{2} \bigr\rfloor\\ 2 &amp;&amp; \bigl\lfloor \frac{m}{4} \bigr\rfloor\\ 3 &amp;&amp; \bigl\lfloor \frac{m}{8} \bigr\rfloor\\ 4 &amp;&amp; \bigl\lfloor \frac{m}{16} \bigr\rfloor\\ 5 &amp;&amp; \bigl\lfloor \frac{m}{32} \bigr\rfloor\\ \end{array}$$</p> <p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\text{ where }a_1=2$. The summation of the third column is the product $m\times n$.</p> <p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p> <pre><code>• Halve the last number in the first column • Write the result below (discard the remainder) • Double the last number in the first column. • Write the result below. </code></pre> <p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. 
</p> Answer 6: <p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p> <p><span class="math-container">$$18\times 37=9\times 74$$</span> Nothing surprising here, we just multiplied by <span class="math-container">$1$</span> in the form <span class="math-container">$2\times 1/2$</span>. Since <span class="math-container">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again: <span class="math-container">$$9\times 74=(4+1/2)\times 148=4\times 148+1/2\times 148$$</span> And what's the last term? It's <span class="math-container">$74$</span>, right above the <span class="math-container">$148$</span>, which is next to the <em>odd</em> number <span class="math-container">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term: <span class="math-container">$$4\times 148=2\times 296$$</span> The <span class="math-container">$4\times 148$</span> row can be ignored. Continuing, <span class="math-container">$$2\times 296=1\times 592$$</span> Putting everything together, we end up with <span class="math-container">$18\times 37=592+74$</span>.</p> Answer 7: <p>Write $m$ in binary. The lines where the number is odd corresponds to the bits that are $1$ in the binary expansion. In your example of $18 \times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \times 37=(2+16) \times 37$, which are exactly the terms you added.</p> Answer 8: <p>After reading Ross Millikan's reply I decided to write a complete proof. 
This is my attempt: </p> <p>A binary number is a way of expressing a number in powers of two, taking the form: </p> <p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p> <p>Where $n$ is the number of digits in the binary number. Therefore:</p> <p>$$m\times n = \left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\right]\times n$$</p> <p>For example:</p> <p>$$=18\times 37$$ $$\quad\;\;\;\;= \left[2^4+2\right]\times 37$$ $$\quad\;\;\;= \left[16+2\right]\times 37$$ $$\;= 592+74$$ $$666$$</p> <p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p> <p>$$\begin{array}{c} \text{Index}&amp;&amp;m&amp;&amp;\text{If col. 1 is odd, }n\times 2^{Index}&amp;\\ \hline 0 &amp;&amp; m\\ 1 &amp;&amp; \bigl\lfloor \frac{m}{2} \bigr\rfloor\\ 2 &amp;&amp; \bigl\lfloor \frac{m}{4} \bigr\rfloor\\ 3 &amp;&amp; \bigl\lfloor \frac{m}{8} \bigr\rfloor\\ 4 &amp;&amp; \bigl\lfloor \frac{m}{16} \bigr\rfloor\\ 5 &amp;&amp; \bigl\lfloor \frac{m}{32} \bigr\rfloor\\ \end{array}$$</p> <p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\text{ where }a_1=2$. The summation of the third column is the product $m\times n$.</p> <p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p> <pre><code>• Halve the last number in the first column • Write the result below (discard the remainder) • Double the last number in the first column. • Write the result below. </code></pre> <p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p> Answer 9: <p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! 
The idea is really simple:</p> <p><span class="math-container">$$18\times 37=9\times 74$$</span> Nothing surprising here, we just multiplied by <span class="math-container">$1$</span> in the form <span class="math-container">$2\times 1/2$</span>. Since <span class="math-container">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again: <span class="math-container">$$9\times 74=(4+1/2)\times 148=4\times 148+1/2\times 148$$</span> And what's the last term? It's <span class="math-container">$74$</span>, right above the <span class="math-container">$148$</span>, which is next to the <em>odd</em> number <span class="math-container">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term: <span class="math-container">$$4\times 148=2\times 296$$</span> The <span class="math-container">$4\times 148$</span> row can be ignored. Continuing, <span class="math-container">$$2\times 296=1\times 592$$</span> Putting everything together, we end up with <span class="math-container">$18\times 37=592+74$</span>.</p> Answer 10: <p>Write $m$ in binary. The lines where the number is odd corresponds to the bits that are $1$ in the binary expansion. In your example of $18 \times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \times 37=(2+16) \times 37$, which are exactly the terms you added.</p> Answer 11: <p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p> <p>A binary number is a way of expressing a number in powers of two, taking the form: </p> <p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p> <p>Where $n$ is the number of digits in the binary number. 
Therefore:</p> <p>$$m\times n = \left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\right]\times n$$</p> <p>For example:</p> <p>$$=18\times 37$$ $$\quad\;\;\;\;= \left[2^4+2\right]\times 37$$ $$\quad\;\;\;= \left[16+2\right]\times 37$$ $$\;= 592+74$$ $$666$$</p> <p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p> <p>$$\begin{array}{c} \text{Index}&amp;&amp;m&amp;&amp;\text{If col. 1 is odd, }n\times 2^{Index}&amp;\\ \hline 0 &amp;&amp; m\\ 1 &amp;&amp; \bigl\lfloor \frac{m}{2} \bigr\rfloor\\ 2 &amp;&amp; \bigl\lfloor \frac{m}{4} \bigr\rfloor\\ 3 &amp;&amp; \bigl\lfloor \frac{m}{8} \bigr\rfloor\\ 4 &amp;&amp; \bigl\lfloor \frac{m}{16} \bigr\rfloor\\ 5 &amp;&amp; \bigl\lfloor \frac{m}{32} \bigr\rfloor\\ \end{array}$$</p> <p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\text{ where }a_1=2$. The summation of the third column is the product $m\times n$.</p> <p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p> <pre><code>• Halve the last number in the first column • Write the result below (discard the remainder) • Double the last number in the first column. • Write the result below. </code></pre> <p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p> Answer 12: <p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p> <p><span class="math-container">$$18\times 37=9\times 74$$</span> Nothing surprising here, we just multiplied by <span class="math-container">$1$</span> in the form <span class="math-container">$2\times 1/2$</span>. 
Since <span class="math-container">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again: <span class="math-container">$$9\times 74=(4+1/2)\times 148=4\times 148+1/2\times 148$$</span> And what's the last term? It's <span class="math-container">$74$</span>, right above the <span class="math-container">$148$</span>, which is next to the <em>odd</em> number <span class="math-container">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term: <span class="math-container">$$4\times 148=2\times 296$$</span> The <span class="math-container">$4\times 148$</span> row can be ignored. Continuing, <span class="math-container">$$2\times 296=1\times 592$$</span> Putting everything together, we end up with <span class="math-container">$18\times 37=592+74$</span>.</p> Answer 13: <p>Write $m$ in binary. The lines where the number is odd corresponds to the bits that are $1$ in the binary expansion. In your example of $18 \times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \times 37=(2+16) \times 37$, which are exactly the terms you added.</p> Answer 14: <p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p> <p>A binary number is a way of expressing a number in powers of two, taking the form: </p> <p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p> <p>Where $n$ is the number of digits in the binary number. 
Therefore:</p> <p>$$m\times n = \left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\right]\times n$$</p> <p>For example:</p> <p>$$=18\times 37$$ $$\quad\;\;\;\;= \left[2^4+2\right]\times 37$$ $$\quad\;\;\;= \left[16+2\right]\times 37$$ $$\;= 592+74$$ $$666$$</p> <p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p> <p>$$\begin{array}{c} \text{Index}&amp;&amp;m&amp;&amp;\text{If col. 1 is odd, }n\times 2^{Index}&amp;\\ \hline 0 &amp;&amp; m\\ 1 &amp;&amp; \bigl\lfloor \frac{m}{2} \bigr\rfloor\\ 2 &amp;&amp; \bigl\lfloor \frac{m}{4} \bigr\rfloor\\ 3 &amp;&amp; \bigl\lfloor \frac{m}{8} \bigr\rfloor\\ 4 &amp;&amp; \bigl\lfloor \frac{m}{16} \bigr\rfloor\\ 5 &amp;&amp; \bigl\lfloor \frac{m}{32} \bigr\rfloor\\ \end{array}$$</p> <p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\text{ where }a_1=2$. The summation of the third column is the product $m\times n$.</p> <p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p> <pre><code>• Halve the last number in the first column • Write the result below (discard the remainder) • Double the last number in the first column. • Write the result below. </code></pre> <p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p> Answer 15: <p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p> <p><span class="math-container">$$18\times 37=9\times 74$$</span> Nothing surprising here, we just multiplied by <span class="math-container">$1$</span> in the form <span class="math-container">$2\times 1/2$</span>. 
Since <span class="math-container">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again: <span class="math-container">$$9\times 74=(4+1/2)\times 148=4\times 148+1/2\times 148$$</span> And what's the last term? It's <span class="math-container">$74$</span>, right above the <span class="math-container">$148$</span>, which is next to the <em>odd</em> number <span class="math-container">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term: <span class="math-container">$$4\times 148=2\times 296$$</span> The <span class="math-container">$4\times 148$</span> row can be ignored. Continuing, <span class="math-container">$$2\times 296=1\times 592$$</span> Putting everything together, we end up with <span class="math-container">$18\times 37=592+74$</span>.</p> Answer 16: <p>Write $m$ in binary. The lines where the number is odd corresponds to the bits that are $1$ in the binary expansion. In your example of $18 \times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \times 37=(2+16) \times 37$, which are exactly the terms you added.</p> Answer 17: <p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p> <p>A binary number is a way of expressing a number in powers of two, taking the form: </p> <p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p> <p>Where $n$ is the number of digits in the binary number. 
Therefore:</p> <p>$$m\times n = \left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\right]\times n$$</p> <p>For example:</p> <p>$$=18\times 37$$ $$\quad\;\;\;\;= \left[2^4+2\right]\times 37$$ $$\quad\;\;\;= \left[16+2\right]\times 37$$ $$\;= 592+74$$ $$666$$</p> <p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p> <p>$$\begin{array}{c} \text{Index}&amp;&amp;m&amp;&amp;\text{If col. 1 is odd, }n\times 2^{Index}&amp;\\ \hline 0 &amp;&amp; m\\ 1 &amp;&amp; \bigl\lfloor \frac{m}{2} \bigr\rfloor\\ 2 &amp;&amp; \bigl\lfloor \frac{m}{4} \bigr\rfloor\\ 3 &amp;&amp; \bigl\lfloor \frac{m}{8} \bigr\rfloor\\ 4 &amp;&amp; \bigl\lfloor \frac{m}{16} \bigr\rfloor\\ 5 &amp;&amp; \bigl\lfloor \frac{m}{32} \bigr\rfloor\\ \end{array}$$</p> <p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\text{ where }a_1=2$. The summation of the third column is the product $m\times n$.</p> <p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p> <pre><code>• Halve the last number in the first column • Write the result below (discard the remainder) • Double the last number in the first column. • Write the result below. </code></pre> <p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p> Answer 18: <p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p> <p><span class="math-container">$$18\times 37=9\times 74$$</span> Nothing surprising here, we just multiplied by <span class="math-container">$1$</span> in the form <span class="math-container">$2\times 1/2$</span>. 
Since <span class="math-container">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again: <span class="math-container">$$9\times 74=(4+1/2)\times 148=4\times 148+1/2\times 148$$</span> And what's the last term? It's <span class="math-container">$74$</span>, right above the <span class="math-container">$148$</span>, which is next to the <em>odd</em> number <span class="math-container">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term: <span class="math-container">$$4\times 148=2\times 296$$</span> The <span class="math-container">$4\times 148$</span> row can be ignored. Continuing, <span class="math-container">$$2\times 296=1\times 592$$</span> Putting everything together, we end up with <span class="math-container">$18\times 37=592+74$</span>.</p> Answer 19: <p>Write $m$ in binary. The lines where the number is odd corresponds to the bits that are $1$ in the binary expansion. In your example of $18 \times 37$ we have $18=10010_2=2^1+2^4,$ so $18 \times 37=(2+16) \times 37$, which are exactly the terms you added.</p> Answer 20: <p>After reading Ross Millikan's reply I decided to write a complete proof. This is my attempt: </p> <p>A binary number is a way of expressing a number in powers of two, taking the form: </p> <p>$$m = d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}$$</p> <p>Where $n$ is the number of digits in the binary number. 
Therefore:</p> <p>$$m\times n = \left[d_n2^{n-1}+d_{n-1}2^{n-2}+d_{n-2}2^{n-3}+...+d_{1}2^{0}\right]\times n$$</p> <p>For example:</p> <p>$$=18\times 37$$ $$\quad\;\;\;\;= \left[2^4+2\right]\times 37$$ $$\quad\;\;\;= \left[16+2\right]\times 37$$ $$\;= 592+74$$ $$666$$</p> <p>The below table is a way of using this identity to multiply two numbers $m$ and $n$: </p> <p>$$\begin{array}{c} \text{Index}&amp;&amp;m&amp;&amp;\text{If col. 1 is odd, }n\times 2^{Index}&amp;\\ \hline 0 &amp;&amp; m\\ 1 &amp;&amp; \bigl\lfloor \frac{m}{2} \bigr\rfloor\\ 2 &amp;&amp; \bigl\lfloor \frac{m}{4} \bigr\rfloor\\ 3 &amp;&amp; \bigl\lfloor \frac{m}{8} \bigr\rfloor\\ 4 &amp;&amp; \bigl\lfloor \frac{m}{16} \bigr\rfloor\\ 5 &amp;&amp; \bigl\lfloor \frac{m}{32} \bigr\rfloor\\ \end{array}$$</p> <p>Where the first column is the sequence of integers, the denominator in the second column is the sequence $a_n = 2a_{n-1}\text{ where }a_1=2$. The summation of the third column is the product $m\times n$.</p> <p>Which is equivalent to these steps: repeat the below steps until the left column == 1:</p> <pre><code>• Halve the last number in the first column • Write the result below (discard the remainder) • Double the last number in the first column. • Write the result below. </code></pre> <p>Now, for every even number in the first column, delete the adjacent number in the second column. Add the remaining numbers in the second column. </p> Answer 21: <p>I'd like to add an answer to the already good ones just to point out that no explicit mention of binary representations is needed to understand the algorithm, yet every explanation I've found online mentions binary. When I first figured out how the algorithm works this didn't even occur to me! The idea is really simple:</p> <p><span class="math-container">$$18\times 37=9\times 74$$</span> Nothing surprising here, we just multiplied by <span class="math-container">$1$</span> in the form <span class="math-container">$2\times 1/2$</span>. 
Since <span class="math-container">$18$</span> is even and no fractions come into play, we've just converted the original product into a new one and can discard the top row. Now we do the same halving/doubling again: <span class="math-container">$$9\times 74=(4+1/2)\times 148=4\times 148+1/2\times 148$$</span> And what's the last term? It's <span class="math-container">$74$</span>, right above the <span class="math-container">$148$</span>, which is next to the <em>odd</em> number <span class="math-container">$9$</span>. Knowing that this will be accounted for at the end, ignore it for now and halve/double again with the remaining term: <span class="math-container">$$4\times 148=2\times 296$$</span> The <span class="math-container">$4\times 148$</span> row can be ignored. Continuing, <span class="math-container">$$2\times 296=1\times 592$$</span> Putting everything together, we end up with <span class="math-container">$18\times 37=592+74$</span>.</p>
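The halving-and-doubling procedure the answers above describe can be sketched in a few lines of Python (the function name is my own, not taken from any of the answers):

```python
def peasant_multiply(m, n):
    """Russian peasant multiplication of two non-negative integers.

    Halve m (discarding remainders) while doubling n; every row where
    the m-column is odd contributes its n-column value to the total.
    This is exactly the binary expansion of m applied to n.
    """
    total = 0
    while m >= 1:
        if m % 2 == 1:   # odd row: its doubled value survives
            total += n
        m //= 2          # halve, discard the remainder
        n *= 2           # double
    return total

print(peasant_multiply(18, 37))  # 592 + 74 = 666
```

With `m = 18` the odd rows are `9 × 74` and `1 × 592`, so the sum is `74 + 592`, matching the worked example above.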
What are the eigenvalues of matrix $uu^T$ where $u$ is a column vector?
2,099,586
6
https://math.stackexchange.com/questions/2099586/what-are-the-eigenvalues-of-matrix-uut-where-u-is-a-column-vector
<p>Please let me know if this problem is duplicated and I will remove it ASAP.</p> <p>I see this problem on an interview book.</p> <p>The vector is defined as $u=(u_1,u_2,...,u_n)^T$.</p> <p>Then the eigenvalues of $uu^T$ are given as $\Sigma_{i=1}^n u_i^2$ with multiplicity 1 and 0 with multiplicity $n-1$.</p> <p>I try to start with $\det(uu^T-\lambda I)$ and try to show this is exactly</p> <p>$(\lambda-\Sigma_{i=1}^n u_i^2)\lambda^{n-1}=0$</p> <p>Any help will be appreciated!</p>
[ { "answer_id": 2099593, "score": 6, "text": "<p>Note that\n$$\n(uu^T)u=u(u^Tu)=||u||^2u\n$$\nso $||u||^2$ is an eigenvalue with $u$ itself a corresponding eigenvector. Now, consider the $N-1$ dimensional subspace $W$ of $\\mathbb{R}^N$ that contains vectors orthogonal to $u$. Then,\n$$\nw\\in W\\implies u^Tw=0\\implies(uu^T)w=u(u^Tw)=0u\n$$\nso $0$ is an eigenvalue with multiplicity $N-1$.</p>\n\n<hr>\n\n<p><strong>Edit</strong>: the above shows that $0$ has geometric multiplicity $N-1$. But\n$$\nN-1=\\text{geometric multiplicity of }0\\leq\\text{algebraic multiplicity of }0\\leq N-1\n$$\n(the last inequality is because there exists another eigenvalue i.e. $||u||^2$) so you actually have\n$$\nN-1=\\text{geometric multiplicity of }0=\\text{algebraic multiplicity of }0= N-1.\n$$</p>\n" }, { "answer_id": 2099591, "score": 0, "text": "<p>First, prove that $u$ is an eigenvector corresponding to eigenvalue $\\sum_i u_i^2$.</p>\n\n<p>Then extend $u$ into an orthogonal basis $\\{u, v_2,\\ldots,v_n\\}$ and show that the other basis elements are zero-eigenvectors.</p>\n" } ]
Question: What are the eigenvalues of matrix $uu^T$ where $u$ is a column vector? Answer 1: <p>Note that $$ (uu^T)u=u(u^Tu)=||u||^2u $$ so $||u||^2$ is an eigenvalue with $u$ itself a corresponding eigenvector. Now, consider the $N-1$ dimensional subspace $W$ of $\mathbb{R}^N$ that contains vectors orthogonal to $u$. Then, $$ w\in W\implies u^Tw=0\implies(uu^T)w=u(u^Tw)=0u $$ so $0$ is an eigenvalue with multiplicity $N-1$.</p> <hr> <p><strong>Edit</strong>: the above shows that $0$ has geometric multiplicity $N-1$. But $$ N-1=\text{geometric multiplicity of }0\leq\text{algebraic multiplicity of }0\leq N-1 $$ (the last inequality is because there exists another eigenvalue i.e. $||u||^2$) so you actually have $$ N-1=\text{geometric multiplicity of }0=\text{algebraic multiplicity of }0= N-1. $$</p> Answer 2: <p>First, prove that $u$ is an eigenvector corresponding to eigenvalue $\sum_i u_i^2$.</p> <p>Then extend $u$ into an orthogonal basis $\{u, v_2,\ldots,v_n\}$ and show that the other basis elements are zero-eigenvectors.</p>
$$</p> Answer 20: <p>First, prove that $u$ an eigenvector corresponding to eigenvalue $\sum_i u_i^2$.</p> <p>Then extend $u$ into an orthogonal basis $\{u, v_2,\ldots,v_n\}$ and show that the other basis elements are zero-eigenvectors.</p> Answer 21: <p>Note that $$ (uu^T)u=u(u^Tu)=||u||^2u $$ so $||u||^2$ is an eigenvalue with $u$ itself a corresponding eigenvector. Now, consider the $N-1$ dimensional subspace $W$ of $\mathbb{R}^N$ that contains vectors orthogonal to $u$. Then, $$ w\in W\implies u^Tw=0\implies(uu^T)w=u(u^Tw)=0u $$ so $0$ is an eigenvalue with multiplicity $N-1$.</p> <hr> <p><strong>Edit</strong>: the above shows that $0$ has geometric multiplicity $N-1$. But $$ N-1=\text{geometric multiplicity of }0\leq\text{algebraic multiplicity of }0\leq N-1 $$ (the last inequality is because there exists another eigenvalue i.e. $||u||^2$) so you actually have $$ N-1=\text{geometric multiplicity of }0=\text{algebraic multiplicity of }0= N-1. $$</p> Answer 22: <p>First, prove that $u$ an eigenvector corresponding to eigenvalue $\sum_i u_i^2$.</p> <p>Then extend $u$ into an orthogonal basis $\{u, v_2,\ldots,v_n\}$ and show that the other basis elements are zero-eigenvectors.</p>
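As a quick numerical sanity check of the rank-one spectrum discussed in the answers above ($uu^T$ has the single nonzero eigenvalue $\|u\|^2=\sum_i u_i^2$ with eigenvector $u$, and eigenvalue $0$ with multiplicity $N-1$): the vector below is an arbitrary illustrative choice, and NumPy is assumed to be available.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0, 4.0])        # arbitrary example vector, N = 4
A = np.outer(u, u)                        # the rank-one matrix u u^T

eigvals = np.sort(np.linalg.eigvalsh(A))  # A is symmetric, so eigvalsh applies

# The top eigenvalue should be ||u||^2 = 1 + 4 + 9 + 16 = 30,
# and the remaining N-1 eigenvalues should vanish.
assert np.isclose(eigvals[-1], u @ u)
assert np.allclose(eigvals[:-1], 0.0)

# u itself is an eigenvector for ||u||^2: (u u^T) u = (u^T u) u.
assert np.allclose(A @ u, (u @ u) * u)
```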
Difficult definite integration $\int_{-2}^{0}\frac{x^2 + x - 5}{(x-1)^2}e^x\,\mathrm dx$
2,099,715
6
https://math.stackexchange.com/questions/2099715/difficult-definite-integration-int-20-fracx2-x-5x-12ex-ma
<p>I found an integral in a contest that seems very difficult to compute. <strong>The answer is $-2$; however, I do not know how to arrive at this answer.</strong></p> <p>$$\int_{-2}^{0}\frac{x^2 + x - 5}{(x-1)^2}e^x\,\mathrm dx$$</p> <p>At first I tried to make the substitution $u = x - 1$, but I did not get anywhere. I also tried to expand the denominator and perform synthetic division, which did not help so much either.</p> <p>Also, I don't think it is possible to do partial fraction decomposition since the degree of the denominator is equal to the degree of the numerator.</p> <hr> <p>Attempt with substitution (Moo's help) -</p> <p>Let $u = x - 1$. Then $\mathrm du = \mathrm dx$ and $$\begin{align} I &amp;= \int_{-2}^{0} \frac{x^2 + x - 5}{(x - 1)^2} e^x\,\mathrm dx \\ &amp;= \int_{-3}^{-1} \frac{x^2 + x - 5}{u^2} e^{u+1}\,\mathrm du \\ &amp;= \int_{-3}^{-1} \frac{e^{u+1}(x^2 + x - 5)}{u^2}\,\mathrm du \end{align}$$</p> <p>Now, $x = u + 1$, $x^2 = u^2 + 2u + 1$ and therefore $$I = \int_{-3}^{-1} \frac{e^{u+1}(u^2 + 3u - 3)}{u^2}\mathrm du = e \int_{-3}^{-1} e^{u}\frac{u^2 + 3u - 3}{u^2}\mathrm du$$</p>
[ { "answer_id": 2099725, "score": 4, "text": "<p>Integration by parts we get\n\\begin{align*}\n\\int\\frac{x^2 + x - 5}{(x-1)^2}e^x \\, \\mathrm{d}x&amp;=-\\frac{x^2 + x - 5}{x-1}e^x +\\int e^{x}\\left ( x+4 \\right )\\, \\mathrm{d}x \\\\\n&amp;=-\\frac{x^2 + x - 5}{x-1}e^x+e^{x}\\left ( x+3 \\right )\\\\\n&amp;=\\frac{x+2}{x-1}e^{x}\n\\end{align*}\nThen you can take it from here.</p>\n" }, { "answer_id": 2099729, "score": 3, "text": "<p>Substituting for $u=x-1$, we get, $$I=\\int_{-2}^{0} \\frac {x^2+x-5}{(x-1)^2} e^x dx =\\int_{-3}^{-1} \\frac {u^2+3u-3}{u^2} e^{u+1} du $$ $$\\Rightarrow I=e [\\int_{-3}^{-1} [3 (\\frac {e^u}{u} -\\frac {e^u}{u^2}) du] +\\int_{-3}^{-1} e^u du] =e [3I_1-3I_2 +I_3] $$ Now we have $$I_1-I_2 =\\int_{-3}^{-1}(\\frac {e^u}{u}-\\frac {e^u}{u^2}) du =\\int_{-3}^{-1} d (\\frac {e^u}{u}) =\\frac {e^u}{u} \\mid_{-3}^{-1} $$ We see that $I_3$ is easy to solve. Finally the answer is $-2$. Hope it helps. </p>\n" } ]
Question: Difficult definite integration $\int_{-2}^{0}\frac{x^2 + x - 5}{(x-1)^2}e^x\,\mathrm dx$ Answer 1: <p>Integration by parts we get \begin{align*} \int\frac{x^2 + x - 5}{(x-1)^2}e^x \, \mathrm{d}x&amp;=-\frac{x^2 + x - 5}{x-1}e^x +\int e^{x}\left ( x+4 \right )\, \mathrm{d}x \\ &amp;=-\frac{x^2 + x - 5}{x-1}e^x+e^{x}\left ( x+3 \right )\\ &amp;=\frac{x+2}{x-1}e^{x} \end{align*} Then you can take it from here.</p> Answer 2: <p>Substituting for $u=x-1$, we get, $$I=\int_{-2}^{0} \frac {x^2+x-5}{(x-1)^2} e^x dx =\int_{-3}^{-1} \frac {u^2+3u-3}{u^2} e^{u+1} du $$ $$\Rightarrow I=e [\int_{-3}^{-1} [3 (\frac {e^u}{u} -\frac {e^u}{u^2}) du] +\int_{-3}^{-1} e^u du] =e [3I_1-3I_2 +I_3] $$ Now we have $$I_1-I_2 =\int_{-3}^{-1}(\frac {e^u}{u}-\frac {e^u}{u^2}) du =\int_{-3}^{-1} d (\frac {e^u}{u}) =\frac {e^u}{u} \mid_{-3}^{-1} $$ We see that $I_3$ is easy to solve. Finally the answer is $-2$. Hope it helps. </p>
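Both routes above land on the antiderivative $F(x)=\frac{x+2}{x-1}e^x$. As a sketch of how one might cross-check the value $-2$ numerically (a hand-written composite Simpson rule with an arbitrary step count; no external libraries assumed):

```python
import math

def f(x):
    # the integrand (x^2 + x - 5) e^x / (x - 1)^2, smooth on [-2, 0]
    return (x * x + x - 5) / (x - 1) ** 2 * math.exp(x)

def F(x):
    # antiderivative obtained by integration by parts
    return (x + 2) / (x - 1) * math.exp(x)

def simpson(g, a, b, n=1000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

exact = F(0) - F(-2)            # F(0) = -2, F(-2) = 0, so exact = -2
approx = simpson(f, -2.0, 0.0)

assert abs(exact - (-2.0)) < 1e-12
assert abs(approx - exact) < 1e-8
```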
if $a$ and $b$ are irrational and $a \neq b$, then is $ab$ necessarily irrational?
2,099,834
6
https://math.stackexchange.com/questions/2099834/if-a-and-b-are-irrational-and-a-neq-b-then-is-ab-necessarily-irrationa
<p>$\sqrt{2}\sqrt{2}$ is rational, so it is not the case that $\forall m,n \in \Bbb{R} - \Bbb{Q},\ mn \in \Bbb{R} - \Bbb{Q}$. What if $m \neq n$? Is there a case where $m \neq n$ and $mn$ is rational?</p>
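The answers below hinge on exhibiting two distinct irrationals with a rational product (and two distinct irrationals whose product stays irrational). A small symbolic check of that pattern; SymPy is assumed to be available, and the sample values are arbitrary:

```python
import sympy as sp

a = sp.sqrt(2)          # irrational
b = 1 / sp.sqrt(2)      # irrational, and b != a

# Two distinct irrationals whose product is rational.
prod = sp.simplify(a * b)
assert prod == 1 and prod.is_rational

# The product of distinct irrationals need not be rational either.
c = sp.sqrt(2) - 1
assert sp.simplify(c * a) == 2 - sp.sqrt(2)
assert (2 - sp.sqrt(2)).is_rational is False

# General recipe: for rational q != 0 and irrational a with a**2 != q,
# b2 = q/a is irrational, b2 != a, and a*b2 == q.
q = sp.Rational(3, 7)
b2 = q / a
assert b2.is_rational is False
assert sp.simplify(a * b2) == q
```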
[ { "answer_id": 2099836, "score": 7, "text": "<p>Counter example for first case, and an example for second case: $a = \\sqrt{2}, b = \\dfrac{1}{\\sqrt{2}}$. $a \\neq b$, yet $ab = 1 \\in \\mathbb{Q}$. For the other case, take $a = \\sqrt{2}-1, b = \\sqrt{2}$, $a \\neq b$, and $ab = 2-\\sqrt{2} \\notin \\mathbb{Q}$.</p>\n" }, { "answer_id": 2099842, "score": 1, "text": "<p><strong>It's trivial that</strong>\n\\begin{eqnarray}\n&amp;&amp; a\\in\\mathbb{Q}\\,,\\,b\\in\\mathbb{Q}\\,\\Rightarrow ab\\in\\mathbb{Q}\\\\\n&amp;&amp; a\\in\\mathbb{Q}\\,,\\,b\\in\\mathbb{Q}^c\\,\\Rightarrow ab\\in\\mathbb{Q}^c\\,\\,\\,,\\,a\\neq0\\\\\n\\end{eqnarray}\nbut with other cases:\n\\begin{eqnarray}\n&amp;&amp; \\sqrt{2}\\in\\mathbb{Q}^c\\,,\\,\\sqrt{2}\\in\\mathbb{Q}^c\\,\\,\\,\\,\\text{but}\\,\\,\\, 2\\in\\mathbb{Q}\\\\\n&amp;&amp; \\sqrt{3}\\in\\mathbb{Q}^c\\,,\\,\\sqrt{2}\\in\\mathbb{Q}^c\\,\\,\\,\\,\\text{but}\\,\\,\\, \\sqrt{6}\\in\\mathbb{Q}^c\\\\\n\\end{eqnarray}\nMore, <strong>that's trivial that</strong>\n\\begin{eqnarray}\n&amp;&amp; a\\in\\mathbb{Q}\\,,\\,b\\in\\mathbb{Q}\\,\\Rightarrow a+b\\in\\mathbb{Q}\\\\\n&amp;&amp; a\\in\\mathbb{Q}\\,,\\,b\\in\\mathbb{Q}^c\\,\\Rightarrow a+b\\in\\mathbb{Q}^c\\\\\n\\end{eqnarray}\nbut with other cases:\n\\begin{eqnarray}\n&amp;&amp; \\sqrt{2}\\in\\mathbb{Q}^c\\,,\\,-\\sqrt{2}\\in\\mathbb{Q}^c\\,\\,\\,\\,\\text{but}\\,\\,\\, 0\\in\\mathbb{Q}\\\\\n&amp;&amp; \\sqrt{3}\\in\\mathbb{Q}^c\\,,\\,\\sqrt{2}\\in\\mathbb{Q}^c\\,\\,\\,\\,\\text{but}\\,\\,\\, \\sqrt{3}+\\sqrt{2}\\in\\mathbb{Q}^c\\\\\n\\end{eqnarray}</p>\n" }, { "answer_id": 2099855, "score": 0, "text": "<p>You are asking if $\\forall m,n\\in\\mathbb R, m\\ne n\\implies mn\\in \\mathbb Q$.</p>\n\n<p>That would mean that $\\forall m\\in\\mathbb R-\\mathbb Q, m\\ne\\sqrt2\\implies m\\sqrt2\\in \\mathbb Q$.</p>\n\n<p>There are many more counterexamples than examples, as $\\mathbb R-\\mathbb Q$ is uncountable, while $\\mathbb Q$ is.</p>\n" }, { "answer_id": 2099925, "score": 0, 
"text": "<p>Be $q$ a rational number other than $0$. Be $a$ an irrational number with $a^2\\ne q$. Then $b=\\frac{q}{a}$ is an irrational number that is not equal to $a$, but $ab=q$ is rational by construction.</p>\n\n<p>So the product of two different irrational numbers is not always irrational. Indeed, the number of counterexamples is uncountable (as for any given $q$ almost all irrational numbers can be chosen as $a$).</p>\n\n<p>Interestingly this means there are as many pairs of irrational numbers whose product is rational as there are products of irrational numbers whose product is irrational, despite there being more irrational numbers than rational numbers.</p>\n" }, { "answer_id": 2099836, "score": 7, "text": "<p>Counter example for first case, and an example for second case: $a = \\sqrt{2}, b = \\dfrac{1}{\\sqrt{2}}$. $a \\neq b$, yet $ab = 1 \\in \\mathbb{Q}$. For the other case, take $a = \\sqrt{2}-1, b = \\sqrt{2}$, $a \\neq b$, and $ab = 2-\\sqrt{2} \\notin \\mathbb{Q}$.</p>\n" }, { "answer_id": 2099842, "score": 1, "text": "<p><strong>It's trivial that</strong>\n\\begin{eqnarray}\n&amp;&amp; a\\in\\mathbb{Q}\\,,\\,b\\in\\mathbb{Q}\\,\\Rightarrow ab\\in\\mathbb{Q}\\\\\n&amp;&amp; a\\in\\mathbb{Q}\\,,\\,b\\in\\mathbb{Q}^c\\,\\Rightarrow ab\\in\\mathbb{Q}^c\\,\\,\\,,\\,a\\neq0\\\\\n\\end{eqnarray}\nbut with other cases:\n\\begin{eqnarray}\n&amp;&amp; \\sqrt{2}\\in\\mathbb{Q}^c\\,,\\,\\sqrt{2}\\in\\mathbb{Q}^c\\,\\,\\,\\,\\text{but}\\,\\,\\, 2\\in\\mathbb{Q}\\\\\n&amp;&amp; \\sqrt{3}\\in\\mathbb{Q}^c\\,,\\,\\sqrt{2}\\in\\mathbb{Q}^c\\,\\,\\,\\,\\text{but}\\,\\,\\, \\sqrt{6}\\in\\mathbb{Q}^c\\\\\n\\end{eqnarray}\nMore, <strong>that's trivial that</strong>\n\\begin{eqnarray}\n&amp;&amp; a\\in\\mathbb{Q}\\,,\\,b\\in\\mathbb{Q}\\,\\Rightarrow a+b\\in\\mathbb{Q}\\\\\n&amp;&amp; a\\in\\mathbb{Q}\\,,\\,b\\in\\mathbb{Q}^c\\,\\Rightarrow a+b\\in\\mathbb{Q}^c\\\\\n\\end{eqnarray}\nbut with other cases:\n\\begin{eqnarray}\n&amp;&amp; 
\\sqrt{2}\\in\\mathbb{Q}^c\\,,\\,-\\sqrt{2}\\in\\mathbb{Q}^c\\,\\,\\,\\,\\text{but}\\,\\,\\, 0\\in\\mathbb{Q}\\\\\n&amp;&amp; \\sqrt{3}\\in\\mathbb{Q}^c\\,,\\,\\sqrt{2}\\in\\mathbb{Q}^c\\,\\,\\,\\,\\text{but}\\,\\,\\, \\sqrt{3}+\\sqrt{2}\\in\\mathbb{Q}^c\\\\\n\\end{eqnarray}</p>\n" }, { "answer_id": 2099855, "score": 0, "text": "<p>You are asking if $\\forall m,n\\in\\mathbb R, m\\ne n\\implies mn\\in \\mathbb Q$.</p>\n\n<p>That would mean that $\\forall m\\in\\mathbb R-\\mathbb Q, m\\ne\\sqrt2\\implies m\\sqrt2\\in \\mathbb Q$.</p>\n\n<p>There are many more counterexamples than examples, as $\\mathbb R-\\mathbb Q$ is uncountable, while $\\mathbb Q$ is.</p>\n" }, { "answer_id": 2099925, "score": 0, "text": "<p>Be $q$ a rational number other than $0$. Be $a$ an irrational number with $a^2\\ne q$. Then $b=\\frac{q}{a}$ is an irrational number that is not equal to $a$, but $ab=q$ is rational by construction.</p>\n\n<p>So the product of two different irrational numbers is not always irrational. Indeed, the number of counterexamples is uncountable (as for any given $q$ almost all irrational numbers can be chosen as $a$).</p>\n\n<p>Interestingly this means there are as many pairs of irrational numbers whose product is rational as there are products of irrational numbers whose product is irrational, despite there being more irrational numbers than rational numbers.</p>\n" }, { "answer_id": 2099836, "score": 7, "text": "<p>Counter example for first case, and an example for second case: $a = \\sqrt{2}, b = \\dfrac{1}{\\sqrt{2}}$. $a \\neq b$, yet $ab = 1 \\in \\mathbb{Q}$. 
For the other case, take $a = \\sqrt{2}-1, b = \\sqrt{2}$, $a \\neq b$, and $ab = 2-\\sqrt{2} \\notin \\mathbb{Q}$.</p>\n" }, { "answer_id": 2099842, "score": 1, "text": "<p><strong>It's trivial that</strong>\n\\begin{eqnarray}\n&amp;&amp; a\\in\\mathbb{Q}\\,,\\,b\\in\\mathbb{Q}\\,\\Rightarrow ab\\in\\mathbb{Q}\\\\\n&amp;&amp; a\\in\\mathbb{Q}\\,,\\,b\\in\\mathbb{Q}^c\\,\\Rightarrow ab\\in\\mathbb{Q}^c\\,\\,\\,,\\,a\\neq0\\\\\n\\end{eqnarray}\nbut with other cases:\n\\begin{eqnarray}\n&amp;&amp; \\sqrt{2}\\in\\mathbb{Q}^c\\,,\\,\\sqrt{2}\\in\\mathbb{Q}^c\\,\\,\\,\\,\\text{but}\\,\\,\\, 2\\in\\mathbb{Q}\\\\\n&amp;&amp; \\sqrt{3}\\in\\mathbb{Q}^c\\,,\\,\\sqrt{2}\\in\\mathbb{Q}^c\\,\\,\\,\\,\\text{but}\\,\\,\\, \\sqrt{6}\\in\\mathbb{Q}^c\\\\\n\\end{eqnarray}\nMore, <strong>that's trivial that</strong>\n\\begin{eqnarray}\n&amp;&amp; a\\in\\mathbb{Q}\\,,\\,b\\in\\mathbb{Q}\\,\\Rightarrow a+b\\in\\mathbb{Q}\\\\\n&amp;&amp; a\\in\\mathbb{Q}\\,,\\,b\\in\\mathbb{Q}^c\\,\\Rightarrow a+b\\in\\mathbb{Q}^c\\\\\n\\end{eqnarray}\nbut with other cases:\n\\begin{eqnarray}\n&amp;&amp; \\sqrt{2}\\in\\mathbb{Q}^c\\,,\\,-\\sqrt{2}\\in\\mathbb{Q}^c\\,\\,\\,\\,\\text{but}\\,\\,\\, 0\\in\\mathbb{Q}\\\\\n&amp;&amp; \\sqrt{3}\\in\\mathbb{Q}^c\\,,\\,\\sqrt{2}\\in\\mathbb{Q}^c\\,\\,\\,\\,\\text{but}\\,\\,\\, \\sqrt{3}+\\sqrt{2}\\in\\mathbb{Q}^c\\\\\n\\end{eqnarray}</p>\n" }, { "answer_id": 2099855, "score": 0, "text": "<p>You are asking if $\\forall m,n\\in\\mathbb R, m\\ne n\\implies mn\\in \\mathbb Q$.</p>\n\n<p>That would mean that $\\forall m\\in\\mathbb R-\\mathbb Q, m\\ne\\sqrt2\\implies m\\sqrt2\\in \\mathbb Q$.</p>\n\n<p>There are many more counterexamples than examples, as $\\mathbb R-\\mathbb Q$ is uncountable, while $\\mathbb Q$ is.</p>\n" }, { "answer_id": 2099925, "score": 0, "text": "<p>Be $q$ a rational number other than $0$. Be $a$ an irrational number with $a^2\\ne q$. 
Then $b=\\frac{q}{a}$ is an irrational number that is not equal to $a$, but $ab=q$ is rational by construction.</p>\n\n<p>So the product of two different irrational numbers is not always irrational. Indeed, the number of counterexamples is uncountable (as for any given $q$ almost all irrational numbers can be chosen as $a$).</p>\n\n<p>Interestingly this means there are as many pairs of irrational numbers whose product is rational as there are products of irrational numbers whose product is irrational, despite there being more irrational numbers than rational numbers.</p>\n" } ]
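The examples in these answers can be checked exactly, with no floating-point rounding, by computing in $\mathbb{Q}(\sqrt{2})$: store each number as a pair of rationals $(a, b)$ standing for $a + b\sqrt{2}$, and note that such a number is rational precisely when $b = 0$ (since $\sqrt{2}$ is irrational). The sketch below is illustrative only and not part of the original thread; the helper names `mul` and `is_rational` are made up for the example.

```python
from fractions import Fraction

def mul(x, y):
    # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def is_rational(x):
    # a + b*sqrt2 with a, b rational is rational iff b == 0
    return x[1] == 0

sqrt2     = (Fraction(0), Fraction(1))      # sqrt(2), irrational
inv_sqrt2 = (Fraction(0), Fraction(1, 2))   # 1/sqrt(2) = sqrt(2)/2, irrational
shifted   = (Fraction(-1), Fraction(1))     # sqrt(2) - 1, irrational

# Two distinct irrationals whose product is rational: sqrt(2) * (1/sqrt(2)) = 1
assert mul(sqrt2, inv_sqrt2) == (Fraction(1), Fraction(0))
# Two distinct irrationals whose product is irrational: (sqrt(2)-1) * sqrt(2) = 2 - sqrt(2)
assert not is_rational(mul(shifted, sqrt2))

print(mul(sqrt2, inv_sqrt2), mul(shifted, sqrt2))
```

This confirms both halves of the top-scoring answer's example: $\sqrt{2}\cdot\frac{1}{\sqrt{2}} = 1$ is rational, while $(\sqrt{2}-1)\sqrt{2} = 2-\sqrt{2}$ is not.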
Question: If $a$ and $b$ are irrational and $a \neq b$, then is $ab$ necessarily irrational? Answer 1: <p>Counterexample for the first case, and an example for the second case: $a = \sqrt{2}, b = \dfrac{1}{\sqrt{2}}$. $a \neq b$, yet $ab = 1 \in \mathbb{Q}$. For the other case, take $a = \sqrt{2}-1, b = \sqrt{2}$, $a \neq b$, and $ab = 2-\sqrt{2} \notin \mathbb{Q}$.</p> Answer 2: <p><strong>It's trivial that</strong> \begin{eqnarray} &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}\,\Rightarrow ab\in\mathbb{Q}\\ &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}^c\,\Rightarrow ab\in\mathbb{Q}^c\,\,\,,\,a\neq0\\ \end{eqnarray} but with other cases: \begin{eqnarray} &amp;&amp; \sqrt{2}\in\mathbb{Q}^c\,,\,\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, 2\in\mathbb{Q}\\ &amp;&amp; \sqrt{3}\in\mathbb{Q}^c\,,\,\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, \sqrt{6}\in\mathbb{Q}^c\\ \end{eqnarray} Moreover, <strong>it's trivial that</strong> \begin{eqnarray} &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}\,\Rightarrow a+b\in\mathbb{Q}\\ &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}^c\,\Rightarrow a+b\in\mathbb{Q}^c\\ \end{eqnarray} but with other cases: \begin{eqnarray} &amp;&amp; \sqrt{2}\in\mathbb{Q}^c\,,\,-\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, 0\in\mathbb{Q}\\ &amp;&amp; \sqrt{3}\in\mathbb{Q}^c\,,\,\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, \sqrt{3}+\sqrt{2}\in\mathbb{Q}^c\\ \end{eqnarray}</p> Answer 3: <p>You are asking if $\forall m,n\in\mathbb R, m\ne n\implies mn\in \mathbb Q$.</p> <p>That would mean that $\forall m\in\mathbb R-\mathbb Q, m\ne\sqrt2\implies m\sqrt2\in \mathbb Q$.</p> <p>There are many more counterexamples than examples, as $\mathbb R-\mathbb Q$ is uncountable, while $\mathbb Q$ is.</p> Answer 4: <p>Let $q$ be a rational number other than $0$, and let $a$ be an irrational number with $a^2\ne q$. 
Then $b=\frac{q}{a}$ is an irrational number that is not equal to $a$, but $ab=q$ is rational by construction.</p> <p>So the product of two different irrational numbers is not always irrational. Indeed, the number of counterexamples is uncountable (as for any given $q$ almost all irrational numbers can be chosen as $a$).</p> <p>Interestingly this means there are as many pairs of irrational numbers whose product is rational as there are products of irrational numbers whose product is irrational, despite there being more irrational numbers than rational numbers.</p> Answer 27: <p>You are asking if $\forall m,n\in\mathbb R, m\ne n\implies mn\in \mathbb Q$.</p> <p>That would mean that $\forall m\in\mathbb R-\mathbb Q, m\ne\sqrt2\implies m\sqrt2\in \mathbb Q$.</p> <p>There are many more counterexamples than examples, as $\mathbb R-\mathbb Q$ is uncountable, while $\mathbb
Q$ is.</p> Answer 28: <p>Be $q$ a rational number other than $0$. Be $a$ an irrational number with $a^2\ne q$. Then $b=\frac{q}{a}$ is an irrational number that is not equal to $a$, but $ab=q$ is rational by construction.</p> <p>So the product of two different irrational numbers is not always irrational. Indeed, the number of counterexamples is uncountable (as for any given $q$ almost all irrational numbers can be chosen as $a$).</p> <p>Interestingly this means there are as many pairs of irrational numbers whose product is rational as there are products of irrational numbers whose product is irrational, despite there being more irrational numbers than rational numbers.</p> Answer 29: <p>Counter example for first case, and an example for second case: $a = \sqrt{2}, b = \dfrac{1}{\sqrt{2}}$. $a \neq b$, yet $ab = 1 \in \mathbb{Q}$. For the other case, take $a = \sqrt{2}-1, b = \sqrt{2}$, $a \neq b$, and $ab = 2-\sqrt{2} \notin \mathbb{Q}$.</p> Answer 30: <p><strong>It's trivial that</strong> \begin{eqnarray} &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}\,\Rightarrow ab\in\mathbb{Q}\\ &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}^c\,\Rightarrow ab\in\mathbb{Q}^c\,\,\,,\,a\neq0\\ \end{eqnarray} but with other cases: \begin{eqnarray} &amp;&amp; \sqrt{2}\in\mathbb{Q}^c\,,\,\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, 2\in\mathbb{Q}\\ &amp;&amp; \sqrt{3}\in\mathbb{Q}^c\,,\,\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, \sqrt{6}\in\mathbb{Q}^c\\ \end{eqnarray} More, <strong>that's trivial that</strong> \begin{eqnarray} &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}\,\Rightarrow a+b\in\mathbb{Q}\\ &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}^c\,\Rightarrow a+b\in\mathbb{Q}^c\\ \end{eqnarray} but with other cases: \begin{eqnarray} &amp;&amp; \sqrt{2}\in\mathbb{Q}^c\,,\,-\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, 0\in\mathbb{Q}\\ &amp;&amp; \sqrt{3}\in\mathbb{Q}^c\,,\,\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, \sqrt{3}+\sqrt{2}\in\mathbb{Q}^c\\ \end{eqnarray}</p> 
Answer 31: <p>You are asking if $\forall m,n\in\mathbb R, m\ne n\implies mn\in \mathbb Q$.</p> <p>That would mean that $\forall m\in\mathbb R-\mathbb Q, m\ne\sqrt2\implies m\sqrt2\in \mathbb Q$.</p> <p>There are many more counterexamples than examples, as $\mathbb R-\mathbb Q$ is uncountable, while $\mathbb Q$ is.</p> Answer 32: <p>Be $q$ a rational number other than $0$. Be $a$ an irrational number with $a^2\ne q$. Then $b=\frac{q}{a}$ is an irrational number that is not equal to $a$, but $ab=q$ is rational by construction.</p> <p>So the product of two different irrational numbers is not always irrational. Indeed, the number of counterexamples is uncountable (as for any given $q$ almost all irrational numbers can be chosen as $a$).</p> <p>Interestingly this means there are as many pairs of irrational numbers whose product is rational as there are products of irrational numbers whose product is irrational, despite there being more irrational numbers than rational numbers.</p> Answer 33: <p>Counter example for first case, and an example for second case: $a = \sqrt{2}, b = \dfrac{1}{\sqrt{2}}$. $a \neq b$, yet $ab = 1 \in \mathbb{Q}$. 
For the other case, take $a = \sqrt{2}-1, b = \sqrt{2}$, $a \neq b$, and $ab = 2-\sqrt{2} \notin \mathbb{Q}$.</p> Answer 34: <p><strong>It's trivial that</strong> \begin{eqnarray} &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}\,\Rightarrow ab\in\mathbb{Q}\\ &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}^c\,\Rightarrow ab\in\mathbb{Q}^c\,\,\,,\,a\neq0\\ \end{eqnarray} but with other cases: \begin{eqnarray} &amp;&amp; \sqrt{2}\in\mathbb{Q}^c\,,\,\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, 2\in\mathbb{Q}\\ &amp;&amp; \sqrt{3}\in\mathbb{Q}^c\,,\,\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, \sqrt{6}\in\mathbb{Q}^c\\ \end{eqnarray} More, <strong>that's trivial that</strong> \begin{eqnarray} &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}\,\Rightarrow a+b\in\mathbb{Q}\\ &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}^c\,\Rightarrow a+b\in\mathbb{Q}^c\\ \end{eqnarray} but with other cases: \begin{eqnarray} &amp;&amp; \sqrt{2}\in\mathbb{Q}^c\,,\,-\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, 0\in\mathbb{Q}\\ &amp;&amp; \sqrt{3}\in\mathbb{Q}^c\,,\,\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, \sqrt{3}+\sqrt{2}\in\mathbb{Q}^c\\ \end{eqnarray}</p> Answer 35: <p>You are asking if $\forall m,n\in\mathbb R, m\ne n\implies mn\in \mathbb Q$.</p> <p>That would mean that $\forall m\in\mathbb R-\mathbb Q, m\ne\sqrt2\implies m\sqrt2\in \mathbb Q$.</p> <p>There are many more counterexamples than examples, as $\mathbb R-\mathbb Q$ is uncountable, while $\mathbb Q$ is.</p> Answer 36: <p>Be $q$ a rational number other than $0$. Be $a$ an irrational number with $a^2\ne q$. Then $b=\frac{q}{a}$ is an irrational number that is not equal to $a$, but $ab=q$ is rational by construction.</p> <p>So the product of two different irrational numbers is not always irrational. 
Indeed, the number of counterexamples is uncountable (as for any given $q$ almost all irrational numbers can be chosen as $a$).</p> <p>Interestingly this means there are as many pairs of irrational numbers whose product is rational as there are products of irrational numbers whose product is irrational, despite there being more irrational numbers than rational numbers.</p> Answer 37: <p>Counter example for first case, and an example for second case: $a = \sqrt{2}, b = \dfrac{1}{\sqrt{2}}$. $a \neq b$, yet $ab = 1 \in \mathbb{Q}$. For the other case, take $a = \sqrt{2}-1, b = \sqrt{2}$, $a \neq b$, and $ab = 2-\sqrt{2} \notin \mathbb{Q}$.</p> Answer 38: <p><strong>It's trivial that</strong> \begin{eqnarray} &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}\,\Rightarrow ab\in\mathbb{Q}\\ &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}^c\,\Rightarrow ab\in\mathbb{Q}^c\,\,\,,\,a\neq0\\ \end{eqnarray} but with other cases: \begin{eqnarray} &amp;&amp; \sqrt{2}\in\mathbb{Q}^c\,,\,\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, 2\in\mathbb{Q}\\ &amp;&amp; \sqrt{3}\in\mathbb{Q}^c\,,\,\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, \sqrt{6}\in\mathbb{Q}^c\\ \end{eqnarray} More, <strong>that's trivial that</strong> \begin{eqnarray} &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}\,\Rightarrow a+b\in\mathbb{Q}\\ &amp;&amp; a\in\mathbb{Q}\,,\,b\in\mathbb{Q}^c\,\Rightarrow a+b\in\mathbb{Q}^c\\ \end{eqnarray} but with other cases: \begin{eqnarray} &amp;&amp; \sqrt{2}\in\mathbb{Q}^c\,,\,-\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, 0\in\mathbb{Q}\\ &amp;&amp; \sqrt{3}\in\mathbb{Q}^c\,,\,\sqrt{2}\in\mathbb{Q}^c\,\,\,\,\text{but}\,\,\, \sqrt{3}+\sqrt{2}\in\mathbb{Q}^c\\ \end{eqnarray}</p> Answer 39: <p>You are asking if $\forall m,n\in\mathbb R, m\ne n\implies mn\in \mathbb Q$.</p> <p>That would mean that $\forall m\in\mathbb R-\mathbb Q, m\ne\sqrt2\implies m\sqrt2\in \mathbb Q$.</p> <p>There are many more counterexamples than examples, as $\mathbb R-\mathbb Q$ is uncountable, while $\mathbb 
Q$ is.</p> Answer 40: <p>Be $q$ a rational number other than $0$. Be $a$ an irrational number with $a^2\ne q$. Then $b=\frac{q}{a}$ is an irrational number that is not equal to $a$, but $ab=q$ is rational by construction.</p> <p>So the product of two different irrational numbers is not always irrational. Indeed, the number of counterexamples is uncountable (as for any given $q$ almost all irrational numbers can be chosen as $a$).</p> <p>Interestingly this means there are as many pairs of irrational numbers whose product is rational as there are products of irrational numbers whose product is irrational, despite there being more irrational numbers than rational numbers.</p>
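Answer 21's two concrete examples can be sanity-checked in a few lines. This is only a floating-point check of the arithmetic (rationality itself cannot be decided numerically), added here as an illustration and not part of the original answers:

```python
import math

# First case: a != b, both irrational, yet the product is rational (= 1).
a = math.sqrt(2)
b = 1 / math.sqrt(2)
assert not math.isclose(a, b)
assert math.isclose(a * b, 1.0)

# Second case: a != b, both irrational, product 2 - sqrt(2), still irrational.
c, d = math.sqrt(2) - 1, math.sqrt(2)
assert not math.isclose(c, d)
assert math.isclose(c * d, 2 - math.sqrt(2))
```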
Find all reals $x$,$y$ satisfying the following equation:
2,100,261
6
https://math.stackexchange.com/questions/2100261/find-all-reals-x-y-satisfying-the-following-equation
<blockquote> <p>Find all positive reals $x,y \in \mathbb{R}^+$ satisfying: $$\frac{x^9}{y} + \frac{y^9}{x} = 10-\frac{8}{xy}$$</p> </blockquote> <p>Since this involves higher exponents I am unable to tackle this problem. Please help me.</p>
[ { "answer_id": 2100296, "score": 9, "text": "<p><b>Hint:</b> Write it as:</p>\n\n<p>$x^{10}+y^{10}=10xy-8$</p>\n\n<p>Then use AM-GM:</p>\n\n<p>$10xy-5=x^{10}+y^{10}+1+1+1\\ge5\\sqrt[5]{x^{10}y^{10}}=5x^2y^2$</p>\n\n<p>Which gives $5(xy-1)^2\\le0$, so $xy=1$</p>\n\n<p>Then initial equation can be rewritten as $x^{10}+\\frac{1}{x^{10}}=2$</p>\n\n<p>Which by AM-GM again, (or by writing it as a $(x^{10}-1)^2=0$) has the solution, $x^{10}=1$, so $x=\\pm1$</p>\n\n<p>So $x=1,y=1$, or $x=-1,y=-1$</p>\n" }, { "answer_id": 2100397, "score": 5, "text": "<p>Just to give a non-AM-GM answer, let $u=xy$, which must be positive in order for $x^{10}+y^{10}=10xy-8$ to have a solution, and note that $x^{10}+y^{10}=(x^5-y^5)^2+2(xy)^5$. Thus</p>\n\n<p>$$\\begin{align}x^{10}+y^{10}-10xy+8\n&amp;=(x^5-y^5)^2+2(u^5-5u+4)\\\\\n&amp;=(x^5-y^5)^2+2(u-1)(u^4+u^3+u^2+u-4)\\\\\n&amp;=(x^5-y^5)^2+2(u-1)^2(u^3+2u^2+3u+4)\\\\\n&amp;\\ge0\n\\end{align}$$</p>\n\n<p>with equality if and only if $x^5=y^5$ and $xy=u=1$ (since $u\\gt0$ implies $u^3+2u^2+3u+4\\gt0$). From this we see that $x=y=\\pm1$ are the only possibilities. In particular, $(x,y)=(1,1)$ is the only solution with $x,y\\in\\mathbb{R}^+$.</p>\n\n<p>Remark: I showed the factorization of $u^5-5u+4$ in two steps for ease of checking.</p>\n" } ]
Question: Find all reals $x$,$y$ satisfying the following equation: Answer 1: <p><b>Hint:</b> Write it as:</p> <p>$x^{10}+y^{10}=10xy-8$</p> <p>Then use AM-GM:</p> <p>$10xy-5=x^{10}+y^{10}+1+1+1\ge5\sqrt[5]{x^{10}y^{10}}=5x^2y^2$</p> <p>Which gives $5(xy-1)^2\le0$, so $xy=1$</p> <p>Then initial equation can be rewritten as $x^{10}+\frac{1}{x^{10}}=2$</p> <p>Which by AM-GM again, (or by writing it as a $(x^{10}-1)^2=0$) has the solution, $x^{10}=1$, so $x=\pm1$</p> <p>So $x=1,y=1$, or $x=-1,y=-1$</p> Answer 2: <p>Just to give a non-AM-GM answer, let $u=xy$, which must be positive in order for $x^{10}+y^{10}=10xy-8$ to have a solution, and note that $x^{10}+y^{10}=(x^5-y^5)^2+2(xy)^5$. Thus</p> <p>$$\begin{align}x^{10}+y^{10}-10xy+8 &amp;=(x^5-y^5)^2+2(u^5-5u+4)\\ &amp;=(x^5-y^5)^2+2(u-1)(u^4+u^3+u^2+u-4)\\ &amp;=(x^5-y^5)^2+2(u-1)^2(u^3+2u^2+3u+4)\\ &amp;\ge0 \end{align}$$</p> <p>with equality if and only if $x^5=y^5$ and $xy=u=1$ (since $u\gt0$ implies $u^3+2u^2+3u+4\gt0$). From this we see that $x=y=\pm1$ are the only possibilities. In particular, $(x,y)=(1,1)$ is the only solution with $x,y\in\mathbb{R}^+$.</p> <p>Remark: I showed the factorization of $u^5-5u+4$ in two steps for ease of checking.</p>
$G$ be a non-measurable subgroup of $(\mathbb R,+)$; $I$ be a bounded interval, then $m^*(G \cap I)=m^*(I)$?
2,101,953
6
https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval
<p>Let $G$ be a non-measurable subgroup of $(\mathbb R,+)$ and let $I$ be a bounded interval. Is it true that $m^*(G \cap I)=m^*(I)(=|I|)$, where $m^*$ denotes the Lebesgue outer measure?</p>
[ { "answer_id": 2114225, "score": 8, "text": "<p>The answer \"yes\" follows from results in the article \"On Two Halves Being Two Wholes\" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. Some readers might not have access, but here is the JSTOR link: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href=\"https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents\" rel=\"nofollow noreferrer\">article</a> <a href=\"https://math.stackexchange.com/users/67070/i707107\">i707107</a> linked to in <a href=\"https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953\">a comment</a>. </p>\n\n<p>In the article a subset $A\\subseteq \\mathbb R$ is called <em>Archimedean</em> if $\\{r\\in\\mathbb R:A+r=A\\}$ is dense in $\\mathbb R$. Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\\cap I)=m^*(I)$.</p>\n\n<p>A dense subgroup of $\\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p>\n" }, { "answer_id": 2115766, "score": 3, "text": "<p>From the article provided by @Jonas Meyer: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a></p>\n\n<p>An alternative approach to this problem is possible. This approach is already provided by @Syang Chen at MathOverflow: <a href=\"https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980\">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p>\n\n<p>Let $f(x) = m^*(G\\cap [0,x])$. 
Then, for any $x, y \\in G \\cap (0,\\infty)$, we have the additivity:\n$$\nf(x+y)=f(x)+f(y).\n$$\nSince $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. </p>\n\n<p>Thus, if $I$ is any open interval, then \n$$\n\\frac{m^*(G\\cap I)}{m^*(I)} = a .\n$$</p>\n\n<p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p>\n" }, { "answer_id": 2114225, "score": 8, "text": "<p>The answer \"yes\" follows from results in the article \"On Two Halves Being Two Wholes\" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. Some readers might not have access, but here is the JSTOR link: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href=\"https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents\" rel=\"nofollow noreferrer\">article</a> <a href=\"https://math.stackexchange.com/users/67070/i707107\">i707107</a> linked to in <a href=\"https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953\">a comment</a>. </p>\n\n<p>In the article a subset $A\\subseteq \\mathbb R$ is called <em>Archimedean</em> if $\\{r\\in\\mathbb R:A+r=A\\}$ is dense in $\\mathbb R$. 
Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\\cap I)=m^*(I)$.</p>\n\n<p>A dense subgroup of $\\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p>\n" }, { "answer_id": 2115766, "score": 3, "text": "<p>From the article provided by @Jonas Meyer: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a></p>\n\n<p>An alternative approach to this problem is possible. This approach is already provided by @Syang Chen at MathOverflow: <a href=\"https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980\">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p>\n\n<p>Let $f(x) = m^*(G\\cap [0,x])$. Then, for any $x, y \\in G \\cap (0,\\infty)$, we have the additivity:\n$$\nf(x+y)=f(x)+f(y).\n$$\nSince $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. </p>\n\n<p>Thus, if $I$ is any open interval, then \n$$\n\\frac{m^*(G\\cap I)}{m^*(I)} = a .\n$$</p>\n\n<p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p>\n" }, { "answer_id": 2114225, "score": 8, "text": "<p>The answer \"yes\" follows from results in the article \"On Two Halves Being Two Wholes\" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. 
Some readers might not have access, but here is the JSTOR link: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href=\"https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents\" rel=\"nofollow noreferrer\">article</a> <a href=\"https://math.stackexchange.com/users/67070/i707107\">i707107</a> linked to in <a href=\"https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953\">a comment</a>. </p>\n\n<p>In the article a subset $A\\subseteq \\mathbb R$ is called <em>Archimedean</em> if $\\{r\\in\\mathbb R:A+r=A\\}$ is dense in $\\mathbb R$. Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\\cap I)=m^*(I)$.</p>\n\n<p>A dense subgroup of $\\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p>\n" }, { "answer_id": 2115766, "score": 3, "text": "<p>From the article provided by @Jonas Meyer: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a></p>\n\n<p>An alternative approach to this problem is possible. This approach is already provided by @Syang Chen at MathOverflow: <a href=\"https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980\">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p>\n\n<p>Let $f(x) = m^*(G\\cap [0,x])$. Then, for any $x, y \\in G \\cap (0,\\infty)$, we have the additivity:\n$$\nf(x+y)=f(x)+f(y).\n$$\nSince $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. 
</p>\n\n<p>Thus, if $I$ is any open interval, then \n$$\n\\frac{m^*(G\\cap I)}{m^*(I)} = a .\n$$</p>\n\n<p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p>\n" }, { "answer_id": 2114225, "score": 8, "text": "<p>The answer \"yes\" follows from results in the article \"On Two Halves Being Two Wholes\" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. Some readers might not have access, but here is the JSTOR link: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href=\"https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents\" rel=\"nofollow noreferrer\">article</a> <a href=\"https://math.stackexchange.com/users/67070/i707107\">i707107</a> linked to in <a href=\"https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953\">a comment</a>. </p>\n\n<p>In the article a subset $A\\subseteq \\mathbb R$ is called <em>Archimedean</em> if $\\{r\\in\\mathbb R:A+r=A\\}$ is dense in $\\mathbb R$. Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\\cap I)=m^*(I)$.</p>\n\n<p>A dense subgroup of $\\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p>\n" }, { "answer_id": 2115766, "score": 3, "text": "<p>From the article provided by @Jonas Meyer: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a></p>\n\n<p>An alternative approach to this problem is possible. 
This approach is already provided by @Syang Chen at MathOverflow: <a href=\"https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980\">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p>\n\n<p>Let $f(x) = m^*(G\\cap [0,x])$. Then, for any $x, y \\in G \\cap (0,\\infty)$, we have the additivity:\n$$\nf(x+y)=f(x)+f(y).\n$$\nSince $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. </p>\n\n<p>Thus, if $I$ is any open interval, then \n$$\n\\frac{m^*(G\\cap I)}{m^*(I)} = a .\n$$</p>\n\n<p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p>\n" }, { "answer_id": 2114225, "score": 8, "text": "<p>The answer \"yes\" follows from results in the article \"On Two Halves Being Two Wholes\" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. Some readers might not have access, but here is the JSTOR link: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href=\"https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents\" rel=\"nofollow noreferrer\">article</a> <a href=\"https://math.stackexchange.com/users/67070/i707107\">i707107</a> linked to in <a href=\"https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953\">a comment</a>. </p>\n\n<p>In the article a subset $A\\subseteq \\mathbb R$ is called <em>Archimedean</em> if $\\{r\\in\\mathbb R:A+r=A\\}$ is dense in $\\mathbb R$. 
Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\\cap I)=m^*(I)$.</p>\n\n<p>A dense subgroup of $\\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p>\n" }, { "answer_id": 2115766, "score": 3, "text": "<p>From the article provided by @Jonas Meyer: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a></p>\n\n<p>An alternative approach to this problem is possible. This approach is already provided by @Syang Chen at MathOverflow: <a href=\"https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980\">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p>\n\n<p>Let $f(x) = m^*(G\\cap [0,x])$. Then, for any $x, y \\in G \\cap (0,\\infty)$, we have the additivity:\n$$\nf(x+y)=f(x)+f(y).\n$$\nSince $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. </p>\n\n<p>Thus, if $I$ is any open interval, then \n$$\n\\frac{m^*(G\\cap I)}{m^*(I)} = a .\n$$</p>\n\n<p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p>\n" }, { "answer_id": 2114225, "score": 8, "text": "<p>The answer \"yes\" follows from results in the article \"On Two Halves Being Two Wholes\" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. 
Some readers might not have access, but here is the JSTOR link: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href=\"https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents\" rel=\"nofollow noreferrer\">article</a> <a href=\"https://math.stackexchange.com/users/67070/i707107\">i707107</a> linked to in <a href=\"https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953\">a comment</a>. </p>\n\n<p>In the article a subset $A\\subseteq \\mathbb R$ is called <em>Archimedean</em> if $\\{r\\in\\mathbb R:A+r=A\\}$ is dense in $\\mathbb R$. Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\\cap I)=m^*(I)$.</p>\n\n<p>A dense subgroup of $\\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p>\n" }, { "answer_id": 2115766, "score": 3, "text": "<p>From the article provided by @Jonas Meyer: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a></p>\n\n<p>An alternative approach to this problem is possible. This approach is already provided by @Syang Chen at MathOverflow: <a href=\"https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980\">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p>\n\n<p>Let $f(x) = m^*(G\\cap [0,x])$. Then, for any $x, y \\in G \\cap (0,\\infty)$, we have the additivity:\n$$\nf(x+y)=f(x)+f(y).\n$$\nSince $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. 
</p>\n\n<p>Thus, if $I$ is any open interval, then \n$$\n\\frac{m^*(G\\cap I)}{m^*(I)} = a .\n$$</p>\n\n<p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p>\n" }, { "answer_id": 2114225, "score": 8, "text": "<p>The answer \"yes\" follows from results in the article \"On Two Halves Being Two Wholes\" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. Some readers might not have access, but here is the JSTOR link: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href=\"https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents\" rel=\"nofollow noreferrer\">article</a> <a href=\"https://math.stackexchange.com/users/67070/i707107\">i707107</a> linked to in <a href=\"https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953\">a comment</a>. </p>\n\n<p>In the article a subset $A\\subseteq \\mathbb R$ is called <em>Archimedean</em> if $\\{r\\in\\mathbb R:A+r=A\\}$ is dense in $\\mathbb R$. Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\\cap I)=m^*(I)$.</p>\n\n<p>A dense subgroup of $\\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p>\n" }, { "answer_id": 2115766, "score": 3, "text": "<p>From the article provided by @Jonas Meyer: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a></p>\n\n<p>An alternative approach to this problem is possible. 
This approach is already provided by @Syang Chen at MathOverflow: <a href=\"https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980\">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p>\n\n<p>Let $f(x) = m^*(G\\cap [0,x])$. Then, for any $x, y \\in G \\cap (0,\\infty)$, we have the additivity:\n$$\nf(x+y)=f(x)+f(y).\n$$\nSince $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. </p>\n\n<p>Thus, if $I$ is any open interval, then \n$$\n\\frac{m^*(G\\cap I)}{m^*(I)} = a .\n$$</p>\n\n<p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p>\n" }, { "answer_id": 2114225, "score": 8, "text": "<p>The answer \"yes\" follows from results in the article \"On Two Halves Being Two Wholes\" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. Some readers might not have access, but here is the JSTOR link: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href=\"https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents\" rel=\"nofollow noreferrer\">article</a> <a href=\"https://math.stackexchange.com/users/67070/i707107\">i707107</a> linked to in <a href=\"https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953\">a comment</a>. </p>\n\n<p>In the article a subset $A\\subseteq \\mathbb R$ is called <em>Archimedean</em> if $\\{r\\in\\mathbb R:A+r=A\\}$ is dense in $\\mathbb R$. 
Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\\cap I)=m^*(I)$.</p>\n\n<p>A dense subgroup of $\\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p>\n" }, { "answer_id": 2115766, "score": 3, "text": "<p>From the article provided by @Jonas Meyer: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a></p>\n\n<p>An alternative approach to this problem is possible. This approach is already provided by @Syang Chen at MathOverflow: <a href=\"https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980\">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p>\n\n<p>Let $f(x) = m^*(G\\cap [0,x])$. Then, for any $x, y \\in G \\cap (0,\\infty)$, we have the additivity:\n$$\nf(x+y)=f(x)+f(y).\n$$\nSince $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. </p>\n\n<p>Thus, if $I$ is any open interval, then \n$$\n\\frac{m^*(G\\cap I)}{m^*(I)} = a .\n$$</p>\n\n<p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p>\n" }, { "answer_id": 2114225, "score": 8, "text": "<p>The answer \"yes\" follows from results in the article \"On Two Halves Being Two Wholes\" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. 
Some readers might not have access, but here is the JSTOR link: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href=\"https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents\" rel=\"nofollow noreferrer\">article</a> <a href=\"https://math.stackexchange.com/users/67070/i707107\">i707107</a> linked to in <a href=\"https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953\">a comment</a>. </p>\n\n<p>In the article a subset $A\\subseteq \\mathbb R$ is called <em>Archimedean</em> if $\\{r\\in\\mathbb R:A+r=A\\}$ is dense in $\\mathbb R$. Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\\cap I)=m^*(I)$.</p>\n\n<p>A dense subgroup of $\\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p>\n" }, { "answer_id": 2115766, "score": 3, "text": "<p>From the article provided by @Jonas Meyer: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a></p>\n\n<p>An alternative approach to this problem is possible. This approach is already provided by @Syang Chen at MathOverflow: <a href=\"https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980\">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p>\n\n<p>Let $f(x) = m^*(G\\cap [0,x])$. Then, for any $x, y \\in G \\cap (0,\\infty)$, we have the additivity:\n$$\nf(x+y)=f(x)+f(y).\n$$\nSince $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. 
</p>\n\n<p>Thus, if $I$ is any open interval, then \n$$\n\\frac{m^*(G\\cap I)}{m^*(I)} = a .\n$$</p>\n\n<p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p>\n" }, { "answer_id": 2114225, "score": 8, "text": "<p>The answer \"yes\" follows from results in the article \"On Two Halves Being Two Wholes\" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. Some readers might not have access, but here is the JSTOR link: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href=\"https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents\" rel=\"nofollow noreferrer\">article</a> <a href=\"https://math.stackexchange.com/users/67070/i707107\">i707107</a> linked to in <a href=\"https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953\">a comment</a>. </p>\n\n<p>In the article a subset $A\\subseteq \\mathbb R$ is called <em>Archimedean</em> if $\\{r\\in\\mathbb R:A+r=A\\}$ is dense in $\\mathbb R$. Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\\cap I)=m^*(I)$.</p>\n\n<p>A dense subgroup of $\\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p>\n" }, { "answer_id": 2115766, "score": 3, "text": "<p>From the article provided by @Jonas Meyer: <a href=\"http://www.jstor.org/stable/2322357\" rel=\"nofollow noreferrer\">http://www.jstor.org/stable/2322357</a></p>\n\n<p>An alternative approach to this problem is possible. 
This approach is already provided by @Syang Chen at MathOverflow: <a href=\"https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980\">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p>\n\n<p>Let $f(x) = m^*(G\\cap [0,x])$. Then, for any $x, y \\in G \\cap (0,\\infty)$, we have the additivity:\n$$\nf(x+y)=f(x)+f(y).\n$$\nSince $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. </p>\n\n<p>Thus, if $I$ is any open interval, then \n$$\n\\frac{m^*(G\\cap I)}{m^*(I)} = a .\n$$</p>\n\n<p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p>\n" } ]
Question: $G$ be a non-measurable subgroup of $(\mathbb R,+)$ ; $I$ be a bounded interval , then $m^*(G \cap I)=m^*(I)$? Answer 1: <p>The answer "yes" follows from results in the article "On Two Halves Being Two Wholes" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. Some readers might not have access, but here is the JSTOR link: <a href="http://www.jstor.org/stable/2322357" rel="nofollow noreferrer">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href="https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">article</a> <a href="https://math.stackexchange.com/users/67070/i707107">i707107</a> linked to in <a href="https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953">a comment</a>. </p> <p>In the article a subset $A\subseteq \mathbb R$ is called <em>Archimedean</em> if $\{r\in\mathbb R:A+r=A\}$ is dense in $\mathbb R$. Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\cap I)=m^*(I)$.</p> <p>A dense subgroup of $\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p> Answer 2: <p>From the article provided by @Jonas Meyer: <a href="http://www.jstor.org/stable/2322357" rel="nofollow noreferrer">http://www.jstor.org/stable/2322357</a></p> <p>An alternative approach to this problem is possible. This approach is already provided by @Syang Chen at MathOverflow: <a href="https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p> <p>Let $f(x) = m^*(G\cap [0,x])$. Then, for any $x, y \in G \cap (0,\infty)$, we have the additivity: $$ f(x+y)=f(x)+f(y). 
$$ Since $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. </p> <p>Thus, if $I$ is any open interval, then $$ \frac{m^*(G\cap I)}{m^*(I)} = a . $$</p> <p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p>
$$ Since $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. </p> <p>Thus, if $I$ is any open interval, then $$ \frac{m^*(G\cap I)}{m^*(I)} = a . $$</p> <p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p> Answer 13: <p>The answer "yes" follows from results in the article "On Two Halves Being Two Wholes" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. Some readers might not have access, but here is the JSTOR link: <a href="http://www.jstor.org/stable/2322357" rel="nofollow noreferrer">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href="https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">article</a> <a href="https://math.stackexchange.com/users/67070/i707107">i707107</a> linked to in <a href="https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953">a comment</a>. </p> <p>In the article a subset $A\subseteq \mathbb R$ is called <em>Archimedean</em> if $\{r\in\mathbb R:A+r=A\}$ is dense in $\mathbb R$. Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\cap I)=m^*(I)$.</p> <p>A dense subgroup of $\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p> Answer 14: <p>From the article provided by @Jonas Meyer: <a href="http://www.jstor.org/stable/2322357" rel="nofollow noreferrer">http://www.jstor.org/stable/2322357</a></p> <p>An alternative approach to this problem is possible. 
This approach is already provided by @Syang Chen at MathOverflow: <a href="https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p> <p>Let $f(x) = m^*(G\cap [0,x])$. Then, for any $x, y \in G \cap (0,\infty)$, we have the additivity: $$ f(x+y)=f(x)+f(y). $$ Since $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. </p> <p>Thus, if $I$ is any open interval, then $$ \frac{m^*(G\cap I)}{m^*(I)} = a . $$</p> <p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p> Answer 15: <p>The answer "yes" follows from results in the article "On Two Halves Being Two Wholes" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. Some readers might not have access, but here is the JSTOR link: <a href="http://www.jstor.org/stable/2322357" rel="nofollow noreferrer">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href="https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">article</a> <a href="https://math.stackexchange.com/users/67070/i707107">i707107</a> linked to in <a href="https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953">a comment</a>. </p> <p>In the article a subset $A\subseteq \mathbb R$ is called <em>Archimedean</em> if $\{r\in\mathbb R:A+r=A\}$ is dense in $\mathbb R$. 
Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\cap I)=m^*(I)$.</p> <p>A dense subgroup of $\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p> Answer 16: <p>From the article provided by @Jonas Meyer: <a href="http://www.jstor.org/stable/2322357" rel="nofollow noreferrer">http://www.jstor.org/stable/2322357</a></p> <p>An alternative approach to this problem is possible. This approach is already provided by @Syang Chen at MathOverflow: <a href="https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p> <p>Let $f(x) = m^*(G\cap [0,x])$. Then, for any $x, y \in G \cap (0,\infty)$, we have the additivity: $$ f(x+y)=f(x)+f(y). $$ Since $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. </p> <p>Thus, if $I$ is any open interval, then $$ \frac{m^*(G\cap I)}{m^*(I)} = a . $$</p> <p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p> Answer 17: <p>The answer "yes" follows from results in the article "On Two Halves Being Two Wholes" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. 
Some readers might not have access, but here is the JSTOR link: <a href="http://www.jstor.org/stable/2322357" rel="nofollow noreferrer">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href="https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">article</a> <a href="https://math.stackexchange.com/users/67070/i707107">i707107</a> linked to in <a href="https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953">a comment</a>. </p> <p>In the article a subset $A\subseteq \mathbb R$ is called <em>Archimedean</em> if $\{r\in\mathbb R:A+r=A\}$ is dense in $\mathbb R$. Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\cap I)=m^*(I)$.</p> <p>A dense subgroup of $\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p> Answer 18: <p>From the article provided by @Jonas Meyer: <a href="http://www.jstor.org/stable/2322357" rel="nofollow noreferrer">http://www.jstor.org/stable/2322357</a></p> <p>An alternative approach to this problem is possible. This approach is already provided by @Syang Chen at MathOverflow: <a href="https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p> <p>Let $f(x) = m^*(G\cap [0,x])$. Then, for any $x, y \in G \cap (0,\infty)$, we have the additivity: $$ f(x+y)=f(x)+f(y). $$ Since $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. </p> <p>Thus, if $I$ is any open interval, then $$ \frac{m^*(G\cap I)}{m^*(I)} = a . $$</p> <p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. 
</p> Answer 19: <p>The answer "yes" follows from results in the article "On Two Halves Being Two Wholes" by Andrew Simoson in <em>The American Mathematical Monthly</em> Vol. 91, No. 3 (Mar., 1984), pp. 190-193. Some readers might not have access, but here is the JSTOR link: <a href="http://www.jstor.org/stable/2322357" rel="nofollow noreferrer">http://www.jstor.org/stable/2322357</a> The article was cited in the <a href="https://www.jstor.org/stable/2325061?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">article</a> <a href="https://math.stackexchange.com/users/67070/i707107">i707107</a> linked to in <a href="https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval#comment4348049_2101953">a comment</a>. </p> <p>In the article a subset $A\subseteq \mathbb R$ is called <em>Archimedean</em> if $\{r\in\mathbb R:A+r=A\}$ is dense in $\mathbb R$. Theorem 2 states that if $A$ is Archimedean with positive outer measure, then for every interval $I$, $m^*(A\cap I)=m^*(I)$.</p> <p>A dense subgroup of $\mathbb R$ is Archimedean, and a nonmeasurable subgroup of $\mathbb R$ is dense with positive outer measure, so the theorem applies to affirm your conjecture.</p> Answer 20: <p>From the article provided by @Jonas Meyer: <a href="http://www.jstor.org/stable/2322357" rel="nofollow noreferrer">http://www.jstor.org/stable/2322357</a></p> <p>An alternative approach to this problem is possible. This approach is already provided by @Syang Chen at MathOverflow: <a href="https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980">https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980</a> </p> <p>Let $f(x) = m^*(G\cap [0,x])$. Then, for any $x, y \in G \cap (0,\infty)$, we have the additivity: $$ f(x+y)=f(x)+f(y). $$ Since $G$ is dense and $f$ is monotone, the additivity is generalized to all positive real numbers $x, y$. Moreover, $f(x) = ax$ with $a= f(1)$ is satisfied. 
</p> <p>Thus, if $I$ is any open interval, then $$ \frac{m^*(G\cap I)}{m^*(I)} = a . $$</p> <p>Now, we find $I$ by the Theorem 1 in the article. This gives $a=1$. </p>
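One step in the alternative approach above is stated quickly: that additivity of $f$ on the dense subgroup $G$, together with monotonicity, forces $f(x)=ax$. A brief sketch of that standard squeeze argument (my own fleshing-out, not taken from the article):

```latex
% For d in G and n a positive integer, additivity iterates to
f(nd) = n\,f(d), \qquad d \in G \cap (0,\infty),\ n \in \mathbb{N}.
% Since f(x) = m^*(G \cap [0,x]) is nondecreasing and G is dense,
% every x > 0 is squeezed by elements of G:
d_1 \le x \le d_2,\ d_1, d_2 \in G \;\Longrightarrow\; f(d_1) \le f(x) \le f(d_2),
% and letting d_1 \uparrow x and d_2 \downarrow x along G pins down
f(x) = a\,x, \qquad a = f(1).
```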
Why is $\sum^\infty_{n=0}\sum^n_{k=0}\frac{b^k}{k!}\cdot\frac{a^{n-k}}{(n-k)!}=\sum^\infty_{n=0}\frac{(a+b)^n}{n!}$?
2,102,241
6
https://math.stackexchange.com/questions/2102241/why-is-sum-infty-n-0-sumn-k-0-fracbkk-cdot-fracan-kn-k
<p>The tutor of my algorithms class used the following equation today:</p> <p>$\sum\limits^\infty_{n=0} \sum\limits^n_{k=0} \frac {b^k} {k!} \cdot \frac {a^{n-k}} {(n-k)!}$ = $\sum\limits^\infty_{n=0} \frac {{(a+b)}^n} {n!}$</p> <p>When I asked him why we are allowed to use this, he just told me that it is usually proved in one of the first maths lectures. However, I still can't see why it should be true. A hint would be really nice.</p> <p>I don't know if this is misleading, but after some thought I figured out that the part $\sum\limits^n_{k=0} \frac {b^k} {k!} \cdot \frac {a^{n-k}} {(n-k)!}$ is a Cauchy product. I then wrote the first terms of the sum on a piece of paper and saw that they look very similar to the binomial theorem, but I can't find a way to make the argument work for infinitely many terms rather than just the ones I wrote down.</p>
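Since the question is whether the double sum really equals the exponential series, a quick numerical sanity check may help; the values of $a$, $b$ and the truncation point $N$ below are arbitrary illustrative choices.

```python
import math

# Truncated double sum: sum_{n=0}^{N} sum_{k=0}^{n} b^k/k! * a^(n-k)/(n-k)!
def double_series(a, b, N):
    total = 0.0
    for n in range(N + 1):
        for k in range(n + 1):
            total += (b**k / math.factorial(k)) * (a**(n - k) / math.factorial(n - k))
    return total

a, b = 0.7, 1.3
print(double_series(a, b, 30))  # converges rapidly toward e^(a+b)
print(math.exp(a + b))
```

With $N=30$ the partial sum already agrees with $e^{a+b}$ to roughly machine precision, which is what the claimed identity predicts.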
[ { "answer_id": 2102260, "score": 2, "text": "<p>The Binomial theorem states that\n$$\\sum_{k=0}^n{n\\choose k}b^k a^{n-k}=(a+b)^n,$$\nso\n$$\\sum\\limits^\\infty_{n=0} \\sum\\limits^n_{k=0} \\frac {b^k} {k!} \\cdot \\frac {a^{n-k}} {(n-k)!}=\\sum\\limits^\\infty_{n=0} \\frac{1}{n!}\\left(\\sum\\limits^n_{k=0} \\frac{n!}{k!(n-k)!}{b^k}a^{n-k}\\right)\\\\\\quad\\quad\\quad\\quad=\\sum\\limits^\\infty_{n=0} \\frac{1}{n!}\\sum\\limits^n_{k=0} {n\\choose k}b^ka^{n-k}\\\\\\quad= \\sum\\limits^\\infty_{n=0} \\frac {{(a+b)}^n} {n!}.$$</p>\n" }, { "answer_id": 2102253, "score": 0, "text": "<p>If you bear in mind that</p>\n\n<p>$$e^x=\\sum_{n=0}^\\infty{x^n\\over n!}$$</p>\n\n<p>the identity we're contemplating is just the series form of </p>\n\n<p>$$e^{a+b}=e^a\\cdot e^b$$</p>\n\n<p>Now with the binomial theorem keeping in mind ${n \\choose k}=n!/k!(n-k)!$</p>\n\n<p>$$(a+b)^n=\\sum_{k=0}^n{n\\choose k}a^kb^{n-k}$$</p>\n\n<p>We can prove it directly</p>\n" } ]
Question: Why is $\sum^\infty_{n=0}\sum^n_{k=0}\frac{b^k}{k!}\cdot\frac{a^{n-k}}{(n-k)!}=\sum^\infty_{n=0}\frac{(a+b)^n}{n!}$? Answer 1: <p>The Binomial theorem states that $$\sum_{k=0}^n{n\choose k}b^k a^{n-k}=(a+b)^n,$$ so $$\sum\limits^\infty_{n=0} \sum\limits^n_{k=0} \frac {b^k} {k!} \cdot \frac {a^{n-k}} {(n-k)!}=\sum\limits^\infty_{n=0} \frac{1}{n!}\left(\sum\limits^n_{k=0} \frac{n!}{k!(n-k)!}{b^k}a^{n-k}\right)\\\quad\quad\quad\quad=\sum\limits^\infty_{n=0} \frac{1}{n!}\sum\limits^n_{k=0} {n\choose k}b^ka^{n-k}\\\quad= \sum\limits^\infty_{n=0} \frac {{(a+b)}^n} {n!}.$$</p> Answer 2: <p>If you bear in mind that</p> <p>$$e^x=\sum_{n=0}^\infty{x^n\over n!}$$</p> <p>the identity we're contemplating is just the series form of </p> <p>$$e^{a+b}=e^a\cdot e^b$$</p> <p>Now with the binomial theorem keeping in mind ${n \choose k}=n!/k!(n-k)!$</p> <p>$$(a+b)^n=\sum_{k=0}^n{n\choose k}a^kb^{n-k}$$</p> <p>We can prove it directly</p>
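Both answers hinge on the same rewrite of the inner sum via the binomial theorem, which is easy to spot-check numerically; $a=2$, $b=3$ and the range of $n$ below are arbitrary test values.

```python
from math import comb, factorial

# Inner sum from the question: sum_k b^k/k! * a^(n-k)/(n-k)!
def inner_sum(a, b, n):
    return sum(b**k / factorial(k) * a**(n - k) / factorial(n - k) for k in range(n + 1))

a, b = 2, 3
for n in range(10):
    # Binomial theorem, in exact integer arithmetic:
    assert sum(comb(n, k) * b**k * a**(n - k) for k in range(n + 1)) == (a + b)**n
    # The rewrite used in Answer 1: the inner sum equals (a+b)^n / n!
    assert abs(inner_sum(a, b, n) - (a + b)**n / factorial(n)) < 1e-9
```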
Does this Limit approach $1$ or $\frac{1}{2}$
2,102,542
6
https://math.stackexchange.com/questions/2102542/does-this-limit-approach-1-or-frac12
<p>Evaluate $$L=\lim _{x \to 0}\left(\frac{e^x}{e^x-1}-\frac{1}{x}\right)$$ I tried two methods.</p> <p>Method $1$. $$L=\lim_{x \to 0}\left(\frac{\frac{e^x}{x}}{\frac{e^x-1}{x}}-\frac{1}{x}\right)$$</p> <p>But $$\lim_{x \to 0}\frac{e^x-1}{x}=1$$ So</p> <p>$$L=\lim_{x \to 0}\left(\frac{e^x}{x}-\frac{1}{x}\right)=\lim_{x \to 0}\frac{e^x-1}{x}=1$$</p> <p>Method $2$. By taking the LCM we get</p> <p>$$L=\lim_{x \to 0}\frac{xe^x-e^x+1}{xe^x-x}$$ Since the limit is in $\frac{0}{0}$ form, using L'Hopital's Rule twice we get</p> <p>$$L=\lim_{x \to 0}\frac{xe^x}{xe^x+e^x-1}$$ i.e.,</p> <p>$$L=\lim_{x \to 0}\frac{xe^x+e^x}{xe^x+2e^x}=\frac{1}{2}$$</p> <p>I am sure that the second method is correct, but I want to know what is wrong in the first method.</p>
[ { "answer_id": 2102546, "score": 4, "text": "<p>The first method is not correct. You either take the limit of the whole expression or you manipulate it. You cannot take the limit of only half of it and take the limit of the rest again later.</p>\n" }, { "answer_id": 2102553, "score": 1, "text": "<p>If limits exist for both functions then you can certainly separate the limits as\n$$\\lim_{x\\to0}f(x)g(x) = \\left(\\lim_{x\\to0}f(x)\\right)\\left(\\lim_{x\\to0}g(x)\\right)$$</p>\n\n<p>But this, doesn't holds true</p>\n\n<p>$$\\lim_{x\\to0}f(x)g(x) \\neq \\lim_{x\\to0}\\left(f(x)\\left(\\lim_{x\\to0}g(x)\\right)\\right)$$</p>\n\n<p>Therefore, in your method I, the following is not true</p>\n\n<p>$$L=\\lim_{x \\to 0}\\frac{\\frac{e^x}{x}}{\\frac{e^x-1}{x}}-\\frac{1}{x} \\neq \\lim_{x \\to 0}\\frac{e^x}{x}-\\frac{1}{x}$$</p>\n" }, { "answer_id": 2102640, "score": 0, "text": "<p>Note that $e^x -1 =x+\\frac12 x^2+O(x^3)$. Hence, we see that</p>\n\n<p>$$\\begin{align}\n\\frac{e^x/x}{(e^x-1)/x}-\\frac1x&amp;=\\frac{e^x-\\left(1+\\frac12 x+O(x^2)\\right)}{x(1+\\frac12 x+O(x^2))}\\\\\\\\\n&amp;=\\frac{\\frac{e^x-1}{x}-\\frac12+O(x)}{1+\\frac12 x+O(x^2)}\n\\end{align}$$</p>\n\n<p>whereupon passing to the limit yields $1/2$ as expected!</p>\n" } ]
Question: Does this Limit approach $1$ or $\frac{1}{2}$ Answer 1: <p>The first method is not correct. You either take the limit of the whole expression or you manipulate it. You cannot take the limit of only half of it and take the limit of the rest again later.</p> Answer 2: <p>If limits exist for both functions then you can certainly separate the limits as $$\lim_{x\to0}f(x)g(x) = \left(\lim_{x\to0}f(x)\right)\left(\lim_{x\to0}g(x)\right)$$</p> <p>But this, doesn't holds true</p> <p>$$\lim_{x\to0}f(x)g(x) \neq \lim_{x\to0}\left(f(x)\left(\lim_{x\to0}g(x)\right)\right)$$</p> <p>Therefore, in your method I, the following is not true</p> <p>$$L=\lim_{x \to 0}\frac{\frac{e^x}{x}}{\frac{e^x-1}{x}}-\frac{1}{x} \neq \lim_{x \to 0}\frac{e^x}{x}-\frac{1}{x}$$</p> Answer 3: <p>Note that $e^x -1 =x+\frac12 x^2+O(x^3)$. Hence, we see that</p> <p>$$\begin{align} \frac{e^x/x}{(e^x-1)/x}-\frac1x&amp;=\frac{e^x-\left(1+\frac12 x+O(x^2)\right)}{x(1+\frac12 x+O(x^2))}\\\\ &amp;=\frac{\frac{e^x-1}{x}-\frac12+O(x)}{1+\frac12 x+O(x^2)} \end{align}$$</p> <p>whereupon passing to the limit yields $1/2$ as expected!</p>
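Beyond what the answers above state, the value $1/2$ is easy to confirm numerically. A minimal Python check (the function name `f` is mine) evaluates the expression from the question at small $x$; since $f(x)=\tfrac12+\tfrac{x}{12}+O(x^3)$, the printed values approach $0.5$ from above:

```python
import math

def f(x):
    # the expression e^x/(e^x - 1) - 1/x from the question
    return math.exp(x) / (math.exp(x) - 1) - 1 / x

# f(x) = 1/2 + x/12 + O(x^3), so the values tend to 0.5 as x -> 0
for x in (1e-1, 1e-2, 1e-3, 1e-4):
    print(x, f(x))
```

This also illustrates why Method 1 fails: replacing only the denominator factor $(e^x-1)/x$ by its limit $1$ changes the expression by an amount of the same order as the quantity being computed.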
An Elementary Inequality Problem
2,103,191
6
https://math.stackexchange.com/questions/2103191/an-elementary-inequality-problem
<p>Given $n+1$ positive real numbers $x_1, x_2,...,x_{n+1}$, suppose: $$\frac{1}{1+x_1}+\frac{1}{1+x_2}+...+\frac{1}{1+x_{n+1}}=1$$ Prove: $$x_1x_2...x_{n+1}\ge{n^{n+1}}$$</p> <p>I have tried the substitution $$t_i=\frac{1}{1+x_i}$$ The problem thus becomes: </p> <p>Given $$t_1+t_2+...+t_{n+1}=1, t_i\gt0$$ Prove: $$(\frac{1}{t_1}-1)(\frac{1}{t_2}-1)...(\frac{1}{t_{n+1}}-1)\ge{n^{n+1}}$$ which is equivalent to the following: $$(\frac{t_2+t_3+...+t_{n+1}}{t_1})(\frac{t_1+t_3+...+t_{n+1}}{t_2})...(\frac{t_1+t_2+...+t_{n}}{t_{n+1}})\ge{n^{n+1}}$$ From here, I think the AM-GM inequality (which says the arithmetic mean is at least the geometric mean) can be applied, but I cannot quite get there. Any hints? Thanks in advance.</p>
[ { "answer_id": 2103206, "score": 3, "text": "<p>By AM-GM $$\\prod\\limits_{i=1}^{n+1}\\frac{x_{i}}{x_i+1}=\\prod\\limits_{i=1}^{n+1}\\sum\\limits_{k\\neq i}\\frac{1}{x_k+1}\\geq\\prod\\limits_{i=1}^{n+1}\\frac{n}{\\sqrt[n]{\\prod\\limits_{k\\neq i}(1+x_k)}}=\\frac{n^{n+1}}{\\prod\\limits_{i=1 }^{n+1}(1+x_i)}$$\nand we are done!</p>\n" }, { "answer_id": 2103213, "score": 2, "text": "<p>From last, let $\\sqrt[n]{t_1t_2...t_{n+1}}=A$ with $AM-GM$:\n$$\\frac{t_2+t_3+...+t_{n+1}}{n}\\geq\\sqrt[n]{t_2t_3...t_{n+1}}=\\frac{A}{\\sqrt[n]{t_1}}$$\nso for the first\n$$t_2+t_3+...+t_{n+1}\\geq\\frac{nA}{\\sqrt[n]{t_1}}$$\nand for the rest take like this.</p>\n" } ]
Question: An Elementary Inequality Problem Answer 1: <p>By AM-GM $$\prod\limits_{i=1}^{n+1}\frac{x_{i}}{x_i+1}=\prod\limits_{i=1}^{n+1}\sum\limits_{k\neq i}\frac{1}{x_k+1}\geq\prod\limits_{i=1}^{n+1}\frac{n}{\sqrt[n]{\prod\limits_{k\neq i}(1+x_k)}}=\frac{n^{n+1}}{\prod\limits_{i=1}^{n+1}(1+x_i)}$$ and we are done!</p> Answer 2: <p>From last, let $\sqrt[n]{t_1t_2...t_{n+1}}=A$ with $AM-GM$: $$\frac{t_2+t_3+...+t_{n+1}}{n}\geq\sqrt[n]{t_2t_3...t_{n+1}}=\frac{A}{\sqrt[n]{t_1}}$$ so for the first $$t_2+t_3+...+t_{n+1}\geq\frac{nA}{\sqrt[n]{t_1}}$$ and for the rest take like this.</p>
Why is it legal to consider $\frac{dy}{dx}$ for a parametric curve when y may not be a valid function of x?
2,104,219
6
https://math.stackexchange.com/questions/2104219/why-is-it-legal-to-consider-fracdydx-for-a-parametric-curve-when-y-may-no
<p>When deriving the formula for the derivative of a parametric curve, in the form of $x = x(t)$ and $y = y(t)$, the chain rule is applied to $\frac{dy}{dt}$ to obtain $\frac{dy}{dt} = \frac{dy}{dx}\cdot\frac{dx}{dt}$, from which the slope of $y$ with respect to $x$ can be obtained.</p> <p><strong>My question is: why is it legal to use $\frac{dy}{dx}$ when $y$ is often not a function of $x$?</strong></p> <p>For example, the curve described by the parametric equations $x=6\sin(t)$ and $y=t^2+t$ (<a href="https://i.stack.imgur.com/eTUZQ.png" rel="noreferrer">image here</a>) is clearly not a function of $x$, since it fails the vertical line test infinitely many times, and yet $\frac{dy}{dx} = \frac{2t+1}{6\cos(t)}$. </p> <p>What is the intuition behind $\frac{dy}{dx}$ in this case? I usually think of $\frac{dy}{dx}$ as the (unique) slope induced from a small change in $x$, but that doesn't make sense here, since a small change in $x$ corresponds to infinitely many changes in $y$.</p>
[ { "answer_id": 2104225, "score": 2, "text": "<p>There is an important theorem in calculus which says, basically, that for any point on a (nice) curve where the curve is non-vertical and not self-intersecting, there is a piece of the curve surrounding that point where the curve <em>looks like</em> the graph of a function. In other words, there is a function $y(x)$ whose graph coincides with that small piece of the curve. Since derivatives only care about the immediate surroundings of any given point, this lets you make sense of $\\frac{dy}{dx}$ at that point.</p>\n" }, { "answer_id": 2104220, "score": 1, "text": "<p>The important facts are that both $x$ and $y$ are functions of $t$ and for each value of $t$ there is only one point of the curve. If the curve has a non-horizontal tangent at that point, then the slope of the tangent will be found by substituting the value of $t$ into the equation for $\\dfrac{dy}{dx}$ at that point.</p>\n" } ]
Question: Why is it legal to consider $\frac{dy}{dx}$ for a parametric curve when y may not be a valid function of x? Answer 1: <p>There is an important theorem in calculus which says, basically, that for any point on a (nice) curve where the curve is non-vertical and not self-intersecting, there is a piece of the curve surrounding that point where the curve <em>looks like</em> the graph of a function. In other words, there is a function $y(x)$ whose graph coincides with that small piece of the curve. Since derivatives only care about the immediate surroundings of any given point, this lets you make sense of $\frac{dy}{dx}$ at that point.</p> Answer 2: <p>The important facts are that both $x$ and $y$ are functions of $t$ and for each value of $t$ there is only one point of the curve. If the curve has a non-horizontal tangent at that point, then the slope of the tangent will be found by substituting the value of $t$ into the equation for $\dfrac{dy}{dx}$ at that point.</p>
Prove that $f$ is increasing if and only if a given inequality holds
2,104,674
6
https://math.stackexchange.com/questions/2104674/prove-that-f-is-increasing-if-and-only-if-a-given-inequality-holds
<p>Let <span class="math-container">$f: [0, \infty) \to \mathbb{R}$</span> be a continuous function. Prove that <span class="math-container">$f$</span> is increasing if and only if: <span class="math-container">$$\int_a^b f(x) dx \leq bf(b) - af(a), \, \forall \, \, 0 \leq a \leq b.$$</span></p> <p>I have no difficulties in proving that if <span class="math-container">$f$</span> is increasing then the inequality holds. But I haven't figured out yet how to prove it the other way around, that is knowing the inequality and proving that <span class="math-container">$f$</span> is increasing.</p> <p>Thank you!</p>
[ { "answer_id": 2104768, "score": 2, "text": "<p>Prove that the integral equation implies that $f$ is increasing via contraposition, viz. prove that if $f$ is not increasing then there exist\n$a_0$ and $b_0$ such that $$\\int_{a_0}^{b_0} f(x) dx &gt; b_0f(b_0) - a_0f(a_0).$$</p>\n\n<p>Indeed, if $f$ is differentiable, not increasing and not everywhere constant, then there exists an interval $[a_0, b_0]\\in\\mathbb{R^+}$ such that for all $x\\in [a_0, b_0]$ we have that $$f(a_0)\\geq f(x) \\geq f(b_0)\\mbox{ and } f(a_0)&gt; f(b_0).$$\nTherefore, we have that $$\\int_{a_0}^{b_0} f(x) dx \\geq f(b_0)(b_0 - a_0)= f(b_0)b_0 - f(b_0)a_0 &gt; f(b_0)b_0 -f(a_0)a_0.$$</p>\n" }, { "answer_id": 2104685, "score": 1, "text": "<p><strong>HINTS</strong>:</p>\n\n<p>Show that, WLOG, we can assume $a = 0$. Work with</p>\n\n<p>$$\\int_0^{b} f(x)dx \\leq bf(b)$$</p>\n\n<p>It should make the result more clear.</p>\n" }, { "answer_id": 2105275, "score": 1, "text": "<p>Note that if $f(a) &gt; f(b) $ then by continuity we can choose $c\\in(a, b] $ such that $f(x) &gt; f(c) $ for all $x\\in [a, c) $ and hence $$\\int_{a} ^{c} f(t) \\, dt&gt;(c-a) f(c) $$ And given condition on $f$ implies that integral above is not greater than $cf(c) - af(a) $. It now follows that $f(a) &lt;f(c) $ which is contrary to $f(a) &gt;f(c) $. Thus we must have $f(a) \\leq f(b) $. </p>\n" }, { "answer_id": 2104768, "score": 2, "text": "<p>Prove that the integral equation implies that $f$ is increasing via contraposition, viz. 
prove that if $f$ is not increasing then there exist\n$a_0$ and $b_0$ such that $$\\int_{a_0}^{b_0} f(x) dx &gt; b_0f(b_0) - a_0f(a_0).$$</p>\n\n<p>Indeed, if $f$ is differentiable, not increasing and not everywhere constant, then there exists an interval $[a_0, b_0]\\in\\mathbb{R^+}$ such for all $x\\in [a_0, b_0]$ we have that $$f(a_0)\\geq f(x) \\geq f(b_0)\\mbox{ and } f(a_0)&gt; f(b_0).$$\nTherefore, we have that $$\\int_{a_0}^{b_0} f(x) dx \\geq f(b_0)(b_0 - a_0)= f(b_0)b_0 - f(b_0)a_0 &gt; f(b_0)b_0 -f(a_0)a_0.$$</p>\n" }, { "answer_id": 2104685, "score": 1, "text": "<p><strong>HINTS</strong>:</p>\n\n<p>Show that, WLOG, we can assume $a = 0$. Work with</p>\n\n<p>$$\\int_0^{b} f(x)dx \\leq bf(b)$$</p>\n\n<p>It should make the result more clear.</p>\n" }, { "answer_id": 2105275, "score": 1, "text": "<p>Note that if $f(a) &gt; f(b) $ then by continuity we can choose $c\\in(a, b] $ such that $f(x) &gt; f(c) $ for all $x\\in [a, c) $ and hence $$\\int_{a} ^{c} f(t) \\, dt&gt;(c-a) f(c) $$ And given condition on $f$ implies that integral above is not greater than $cf(c) - af(a) $. It now follows that $f(a) &lt;f(c) $ which is contrary to $f(a) &gt;f(c) $. Thus we must have $f(a) \\leq f(b) $. </p>\n" }, { "answer_id": 2104768, "score": 2, "text": "<p>Prove that the integral equation implies that $f$ is increasing via contraposition, viz. 
prove that if $f$ is not increasing then there exist\n$a_0$ and $b_0$ such that $$\\int_{a_0}^{b_0} f(x) dx &gt; b_0f(b_0) - a_0f(a_0).$$</p>\n\n<p>Indeed, if $f$ is differentiable, not increasing and not everywhere constant, then there exists an interval $[a_0, b_0]\\in\\mathbb{R^+}$ such for all $x\\in [a_0, b_0]$ we have that $$f(a_0)\\geq f(x) \\geq f(b_0)\\mbox{ and } f(a_0)&gt; f(b_0).$$\nTherefore, we have that $$\\int_{a_0}^{b_0} f(x) dx \\geq f(b_0)(b_0 - a_0)= f(b_0)b_0 - f(b_0)a_0 &gt; f(b_0)b_0 -f(a_0)a_0.$$</p>\n" }, { "answer_id": 2104685, "score": 1, "text": "<p><strong>HINTS</strong>:</p>\n\n<p>Show that, WLOG, we can assume $a = 0$. Work with</p>\n\n<p>$$\\int_0^{b} f(x)dx \\leq bf(b)$$</p>\n\n<p>It should make the result more clear.</p>\n" }, { "answer_id": 2105275, "score": 1, "text": "<p>Note that if $f(a) &gt; f(b) $ then by continuity we can choose $c\\in(a, b] $ such that $f(x) &gt; f(c) $ for all $x\\in [a, c) $ and hence $$\\int_{a} ^{c} f(t) \\, dt&gt;(c-a) f(c) $$ And given condition on $f$ implies that integral above is not greater than $cf(c) - af(a) $. It now follows that $f(a) &lt;f(c) $ which is contrary to $f(a) &gt;f(c) $. Thus we must have $f(a) \\leq f(b) $. </p>\n" } ]
Question: Prove that $f$ is increasing if and only if a given inequality holds Answer 1: <p>Prove that the integral equation implies that $f$ is increasing via contraposition, viz. prove that if $f$ is not increasing then there exist $a_0$ and $b_0$ such that $$\int_{a_0}^{b_0} f(x) dx &gt; b_0f(b_0) - a_0f(a_0).$$</p> <p>Indeed, if $f$ is differentiable, not increasing and not everywhere constant, then there exists an interval $[a_0, b_0]\in\mathbb{R^+}$ such for all $x\in [a_0, b_0]$ we have that $$f(a_0)\geq f(x) \geq f(b_0)\mbox{ and } f(a_0)&gt; f(b_0).$$ Therefore, we have that $$\int_{a_0}^{b_0} f(x) dx \geq f(b_0)(b_0 - a_0)= f(b_0)b_0 - f(b_0)a_0 &gt; f(b_0)b_0 -f(a_0)a_0.$$</p> Answer 2: <p><strong>HINTS</strong>:</p> <p>Show that, WLOG, we can assume $a = 0$. Work with</p> <p>$$\int_0^{b} f(x)dx \leq bf(b)$$</p> <p>It should make the result more clear.</p> Answer 3: <p>Note that if $f(a) &gt; f(b) $ then by continuity we can choose $c\in(a, b] $ such that $f(x) &gt; f(c) $ for all $x\in [a, c) $ and hence $$\int_{a} ^{c} f(t) \, dt&gt;(c-a) f(c) $$ And given condition on $f$ implies that integral above is not greater than $cf(c) - af(a) $. It now follows that $f(a) &lt;f(c) $ which is contrary to $f(a) &gt;f(c) $. Thus we must have $f(a) \leq f(b) $. </p>
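An editorial numeric check of the contraposition step above (a sketch, not part of the original posts; the choice $f(x) = -x$ on $[1, 2]$ is a hypothetical example): for this decreasing function the inequality $\int_a^b f(x)\,dx \le bf(b) - af(a)$ fails, exactly as Answer 1's argument predicts.

```python
# Illustration: for the decreasing f(x) = -x on [1, 2], the integral
# exceeds b*f(b) - a*f(a), so the problem's hypothesis rules such f out.
def f(x):
    return -x

def integral(g, a, b, n=100_000):
    # midpoint Riemann sum (exact for linear integrands, up to rounding)
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 1.0, 2.0
lhs = integral(f, a, b)      # exact value: -1.5
rhs = b * f(b) - a * f(a)    # 2*(-2) - 1*(-1) = -3.0
print(lhs > rhs)             # True: integral <= b f(b) - a f(a) is violated
```

Any interval $[a_0, b_0]$ on which $f$ strictly drops would do equally well; the code only makes Answer 1's strict inequality concrete.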
Probability of choosing same color ball 2 times in a row
2,104,811
6
https://math.stackexchange.com/questions/2104811/probability-of-choosing-same-color-ball-2-times-in-a-row
<p>My son's teacher and I are having a debate about the correct answer to a question. I have an engineer at hand and he has a mathematician so we both feel well supported. We've also both researched the internet and found answers that we feel support our answers. Since his answer came from this website, I decided to ask this same website. Here is the question from my son's 6th grade quiz:</p> <p>"A box contains three balls of different colors. The colors are red, white and blue. What is the probability of choosing the same color ball 2 times in a row?" Your choices are: A. 2/3 B. 1/9 C. 1/3 D. 2/27</p> <p>My son answered B. 1/9. I concurred and so did my husband, the engineer. The test answer sheet also said 1/9. His teacher says it would be 1/3 and he emailed me an explanation from this website about 4 red and 6 white balls (link: <a href="https://math.stackexchange.com/questions/472819/probability-of-first-and-second-drawn-balls-of-the-same-color-without-replaceme">Probability of first and second drawn balls of the same color, without replacement</a>).</p> <p>Honestly, I don't see it. I think that explanation would support 1/9.</p> <p>We've spent quite a bit of time on this so we really have tried to reconcile on this. Could you help us with this simpler sample set of only 3 balls, one of each color? Note, we have agreed that we can assume that the first ball was replaced since there is no zero% probability in the available answers.</p> <p>We think that the first pick would 1/3 for any color and the second pick (with the ball replaced in the set) would be 1/3 again. 1/3 x 1/3 = 1/9</p> <p>He says that the first pick would be a 3/3 (100%) chance of picking any color since the color is not specified, second pick 1/3 so 1/1 x 1/3 = 1/3</p> <p>I'd really appreciate your help. Thank you.</p>
[ { "answer_id": 2104829, "score": 3, "text": "<p>Answer is $\\dfrac{1}{3}$</p>\n\n<p>Any ball can be taken first and for the second ball to be the same color, there is a probability of $\\dfrac{1}{3}$. So, required probability is $\\dfrac{1}{3}$</p>\n\n<p>Other argument can be favorable outcomes are $3×1=3$ and total possible outcomes are $3×3=9$ and therefore, probability is $\\dfrac{3}{9}=\\dfrac{1}{3}$</p>\n" }, { "answer_id": 2104955, "score": 2, "text": "<p>Its simple. </p>\n\n<p>Let red ball is picked both times then we have probability = $\\frac13 \\times \\frac13 = \\frac19$</p>\n\n<p>Or it can be white both times = $\\frac13 \\times \\frac13 = \\frac19$</p>\n\n<p>Or blue both times = $\\frac13 \\times \\frac13 = \\frac19$</p>\n\n<p>Probability = $\\frac19 + \\frac19 + \\frac19 = \\frac39 = \\frac13$</p>\n" }, { "answer_id": 2104818, "score": 1, "text": "<p>The answer is 1/3, since for any pick of the first ball, the probability of picking the same ball next time is 1/3. Another way to look at it, is to check all 9 possible combinations of (first ball, second ball). 3 out of the 9 i.e. blue-blue, red-red, white-white, have the second ball match the first, for a total of 3/9=1/3.</p>\n" } ]
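Since the thread above is a dispute about whether the right model is 1/3 × 1/3 or 1 × 1/3, an empirical check may help. This is an editorial sketch using only Python's standard library, with two independent draws with replacement as the question stipulates:

```python
import random

def same_color_twice(trials=100_000, seed=0):
    """Fraction of trials in which two draws with replacement
    from {red, white, blue} give the same color."""
    rng = random.Random(seed)
    colors = ["red", "white", "blue"]
    hits = sum(rng.choice(colors) == rng.choice(colors) for _ in range(trials))
    return hits / trials

p = same_color_twice()
print(p)  # settles near 1/3 (about 0.333), not 1/9 (about 0.111)
```

The first draw is free to be any color (probability 1) and only the second must match it (probability 1/3), which is why the simulation agrees with the 1/3 answers above; 1/9 would be the probability of drawing one *pre-specified* color twice.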
Question: Probability of choosing same color ball 2 times in a row Answer 1: <p>Answer is $\dfrac{1}{3}$</p> <p>Any ball can be taken first and for the second ball to be the same color, there is a probability of $\dfrac{1}{3}$. So, required probability is $\dfrac{1}{3}$</p> <p>Other argument can be favorable outcomes are $3×1=3$ and total possible outcomes are $3×3=9$ and therefore, probability is $\dfrac{3}{9}=\dfrac{1}{3}$</p> Answer 2: <p>Its simple. </p> <p>Let red ball is picked both times then we have probability = $\frac13 \times \frac13 = \frac19$</p> <p>Or it can be white both times = $\frac13 \times \frac13 = \frac19$</p> <p>Or blue both times = $\frac13 \times \frac13 = \frac19$</p> <p>Probability = $\frac19 + \frac19 + \frac19 = \frac39 = \frac13$</p> Answer 3: <p>The answer is 1/3, since for any pick of the first ball, the probability of picking the same ball next time is 1/3. Another way to look at it, is to check all 9 possible combinations of (first ball, second ball). 3 out of the 9 i.e. blue-blue, red-red, white-white, have the second ball match the first, for a total of 3/9=1/3.</p>
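The counting argument in the answers above (3 matching pairs out of 9 equally likely ordered draws) can be checked by direct enumeration; this is an illustrative sketch, not part of the original thread:

```python
from fractions import Fraction
from itertools import product

# One red, one white and one blue ball; two draws, each uniform over the colors.
colors = ["red", "white", "blue"]

# All ordered (first, second) outcomes are equally likely: 3 * 3 = 9 pairs.
outcomes = list(product(colors, repeat=2))
same = [pair for pair in outcomes if pair[0] == pair[1]]

print(len(same), len(outcomes))            # 3 9
print(Fraction(len(same), len(outcomes)))  # 1/3
```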
If $2^{2017} + 2^{2014} + 2^n$ is a perfect square, find $n$.
2,105,224
6
https://math.stackexchange.com/questions/2105224/if-22017-22014-2n-is-a-perfect-square-find-n
<p>If $2^{2017} + 2^{2014} + 2^n$ is a perfect square, find $n$.</p> <hr> <p>My first shot would be to assume the perfect square is $2^{2018}$, but how would I prove that? Even if it is, what is $n$? All help is appreciated.</p>
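The candidate exponent can be sanity-checked with exact integer arithmetic; this sketch (not part of the original thread) verifies that $n=2018$ gives a perfect square and that no other exponent in a generous search window does:

```python
import math

def is_perfect_square(x: int) -> bool:
    # math.isqrt computes the exact integer square root of arbitrary-size ints.
    r = math.isqrt(x)
    return r * r == x

base = 2**2017 + 2**2014

# n = 2018 works, with square root 2^1007 + 2^1009.
assert is_perfect_square(base + 2**2018)
assert math.isqrt(base + 2**2018) == 2**1007 + 2**1009

# Scan a window of exponents: 2018 is the only hit.
hits = [n for n in range(4100) if is_perfect_square(base + 2**n)]
print(hits)  # [2018]
```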
[ { "answer_id": 2105236, "score": 11, "text": "<p>$(2^x+2^y)^2=2^{2x}+2^{x+y+1}+2^{2y}$ so we can take $y=1007$ and $x=1009$ to conclude $2^{2018}+2^{2017}+2^{2014}$ is a square.</p>\n\n<hr>\n\n<p>how to obtain all $n$:</p>\n\n<p>$9(2^{2014})=2^{2017}+2^{2014}=k^2-2^n$.</p>\n\n<p>Notice that we must have $k^2\\equiv 1 \\bmod 3$ implying $2^n\\equiv 1\\bmod 3$.</p>\n\n<p>So $n=2m$.</p>\n\n<p>From here we have $2^{2014}\\times 9= (k+2^m)(k-2^m)=z(z+2^{m+1})$</p>\n\n<p>there are now two cases, case one is when $z=9(2^j)$ and case two is when $z=2^j$</p>\n\n<p>the first case is impossible because the number $9$ in binary is $1001$ and adding a power of two clearly wont make it a power of $2$. The other case is clearly only solved by taking $z=2^{1007}$ and taking $m=1009$ thus getting $z(z+2^{m+1})=2̣^{1007}(2^{1007}+2^{1010})=9(2^{2014})$.</p>\n\n<p>So we need $n=2(1009)=2018$</p>\n" }, { "answer_id": 2105237, "score": 9, "text": "<p>The square is $$(2^{1007}+2^{1009})^2=2^{2014}+2^{2017}+2^{2018}$$</p>\n\n<p>The answe is therefore $n=2018$</p>\n" }, { "answer_id": 2106664, "score": 2, "text": "<p>If you want <em>an</em> answer then $(a+b)^2 = a^2 + 2ab + b^2$ should suggest an obvious answer by setting $a^2 = 2^{2014}; 2ab=2^{2017}; 2^n = c^2$. </p>\n\n<p>i.e $(2^{1007} + 2^{\\frac n2})^2 = 2^{2014} + 2*2^{1007}*2^{\\frac n2} + 2^n= 2^{2014} + 2^{1 + 1007 + \\frac n2} + 2^n$. We just have to solve for $1 + 1007 + \\frac n2 = 2017$. So $n = 2018$.</p>\n\n<p>But is it the only solution?</p>\n\n<p>Bear with me.</p>\n\n<p>Let $m$ be any positive integer. Let $m = \\sum_{i=0}^k a_i2^i; a_i = \\{0|1\\}$ but its unique binary expansion. </p>\n\n<p>Claim: If $m$ has 3 or more non-zero terms in its binary expansion then $m^2$ has more than 3 non-zero terms in its binary expansion.</p>\n\n<p>Proof: Let $m = 2^a + 2^b + 2^c + \\sum_{i= c+1}^k a_i 2^i; a &lt; b &lt; c$. ($a$ may equal $0$. and $a_i; i &gt; c$ may all be $0$.) 
</p>\n\n<p>Then $m = (1 + 2^{b'} + 2^{c'} + \\sum_{i=c+1}^k a_i2^{i - a})2^a; c'= c-a;b'=b-a$</p>\n\n<p>So $m^2 = [(1 + 2^{b'} + 2^{c'} + 2*2^{b'}2^{c'}+ 2^{2b'} + 2^{c'}) + 2(1 + 2^{b'} + 2^{c'})\\sum_{i=c+1}^k a_i2^{i - a} + (\\sum_{i=c+1}^k a_i2^{i - a})^2]2^{2a}$.</p>\n\n<p>$= 2^a + 2^b + 2^c + 2^{1+b+c} +2^{2b} + 2^{2c} +..... $.</p>\n\n<p>Note all the later terms, if they exist, are larger than $2^{2c}$ so $m^2$ has at least four non-zero terms in its binary expansion.</p>\n\n<p>Okay...</p>\n\n<p>So $2^{2017} + 2^{2014} + 2^n = m^2$ has at most three non-zero terms in its expansion. So $m$ has at most $2$ it its expansion.</p>\n\n<p>So $m = 2^k$ or $m = 2^k + 2^j$.</p>\n\n<p>$2^{2017} + 2^{2014} + 2^n = 2^{2k}$ is impossible.</p>\n\n<p>And $2^{2017} + 2^{2014} + 2^n = (2^k + 2^j)^2 = 2^{2k} + 2*2^j*2^k + 2^{2k}$ has only $n = 2018$ for solution.</p>\n\n<p>So $n = 2018$ is the only solution.</p>\n" }, { "answer_id": 2105455, "score": 1, "text": "<p>The simple, fast, paper and pencil approach to find <strong>an</strong> answer (not <em>all</em> answers) is thus:</p>\n\n<p>$$2^{2017} + 2^{2014} + 2^n$$ Factor: $$2^{2014}(2^{3} + 1 + 2^{n-2014})$$</p>\n\n<p>If you factor out a perfect square from any number, the result is a perfect square if and only if the original number was a perfect square. 
Therefore we can just ignore the factor $2^{2014}$ as it is a perfect square, and seek to find an $n$ such that the following is a perfect square: $$2^{3} + 1 + 2^{n-2014}$$ Simplified: $$9+2^{n-2014}$$</p>\n\n<p>So the simple question is, what power of two added to $9$ results in a perfect square?</p>\n\n<p>The smallest answer to this modified question* is easily found by trial and error, or by remembering the $3,4,5$ pythagorean triple, or any number of other ways:</p>\n\n<p>$$9 + 16 = 25 \\Rightarrow 9+2^4=25$$</p>\n\n<p>Thus, we can get a possible value for $n$ using: $$n-2014=4$$</p>\n\n<p>So one possible value for $n$ is $4+2014=2018$.</p>\n\n<hr>\n\n<p><sub>*This allows us to find the smallest possible value of $n$ that is <em>greater than</em> $2014$. There may be a smaller answer.</sub></p>\n" }, { "answer_id": 4560253, "score": 0, "text": "<p>We are going to solve a more general question: For any given natural number <span class=\"math-container\">$k$</span>, find the integer <span class=\"math-container\">$n$</span> satisfying that\n<span class=\"math-container\">$$\n2^{2k}+2^{2k+3}+2^n=m^2,\n$$</span>\nwhere <span class=\"math-container\">$m$</span> is a natural number.\nSimplifying the equation to <span class=\"math-container\">$$9\\cdot 2^{2k}+2^n=m^2 \\Leftrightarrow 2^n=(m+p)(m-p), $$</span>\nwhere <span class=\"math-container\">$p=3\\cdot 2^{k}.$</span></p>\n<p>Splitting the power of <span class=\"math-container\">$2$</span> yields\n<span class=\"math-container\">$$\n\\left\\{\\begin{array}{l}\nm+p=2^s \\cdots (1)\\\\\nm-p=2^{n-s} \\cdots (2)\n\\end{array}\\right.\n$$</span>\nfor some integer <span class=\"math-container\">$s&gt;n-s.$</span></p>\n<p><span class=\"math-container\">$(1)-(2)$</span> gives <span class=\"math-container\">$$\n3 \\cdot 2^{k}=p=2^{n-s-1}\\left(2^{2s-n}-1\\right) \n$$</span>\n<span class=\"math-container\">$$\n\\Rightarrow \\left\\{\\begin{array}{l}\nn-s-1=k \\\\\n2 s-n=2\n\\end{array}\\right. 
\\Rightarrow \\boxed{n=2k+4},\n$$</span></p>\n<p>which is the <strong>unique</strong> solution of the equation,</p>\n<p>Back to our equation, when <span class=\"math-container\">$2k=2014, n=2018$</span> is its <strong>unique</strong> solution.</p>\n" }, { "answer_id": 2105236, "score": 11, "text": "<p>$(2^x+2^y)^2=2^{2x}+2^{x+y+1}+2^{2y}$ so we can take $y=1007$ and $x=1009$ to conclude $2^{2018}+2^{2017}+2^{2014}$ is a square.</p>\n\n<hr>\n\n<p>how to obtain all $n$:</p>\n\n<p>$9(2^{2014})=2^{2017}+2^{2014}=k^2-2^n$.</p>\n\n<p>Notice that we must have $k^2\\equiv 1 \\bmod 3$ implying $2^n\\equiv 1\\bmod 3$.</p>\n\n<p>So $n=2m$.</p>\n\n<p>From here we have $2^{2014}\\times 9= (k+2^m)(k-2^m)=z(z+2^{m+1})$</p>\n\n<p>there are now two cases, case one is when $z=9(2^j)$ and case two is when $z=2^j$</p>\n\n<p>the first case is impossible because the number $9$ in binary is $1001$ and adding a power of two clearly wont make it a power of $2$. The other case is clearly only solved by taking $z=2^{1007}$ and taking $m=1009$ thus getting $z(z+2^{m+1})=2̣^{1007}(2^{1007}+2^{1010})=9(2^{2014})$.</p>\n\n<p>So we need $n=2(1009)=2018$</p>\n" }, { "answer_id": 2105237, "score": 9, "text": "<p>The square is $$(2^{1007}+2^{1009})^2=2^{2014}+2^{2017}+2^{2018}$$</p>\n\n<p>The answe is therefore $n=2018$</p>\n" }, { "answer_id": 2106664, "score": 2, "text": "<p>If you want <em>an</em> answer then $(a+b)^2 = a^2 + 2ab + b^2$ should suggest an obvious answer by setting $a^2 = 2^{2014}; 2ab=2^{2017}; 2^n = c^2$. </p>\n\n<p>i.e $(2^{1007} + 2^{\\frac n2})^2 = 2^{2014} + 2*2^{1007}*2^{\\frac n2} + 2^n= 2^{2014} + 2^{1 + 1007 + \\frac n2} + 2^n$. We just have to solve for $1 + 1007 + \\frac n2 = 2017$. So $n = 2018$.</p>\n\n<p>But is it the only solution?</p>\n\n<p>Bear with me.</p>\n\n<p>Let $m$ be any positive integer. Let $m = \\sum_{i=0}^k a_i2^i; a_i = \\{0|1\\}$ but its unique binary expansion. 
</p>\n\n<p>Claim: If $m$ has 3 or more non-zero terms in its binary expansion then $m^2$ has more than 3 non-zero terms in its binary expansion.</p>\n\n<p>Proof: Let $m = 2^a + 2^b + 2^c + \\sum_{i= c+1}^k a_i 2^i; a &lt; b &lt; c$. ($a$ may equal $0$. and $a_i; i &gt; c$ may all be $0$.) </p>\n\n<p>Then $m = (1 + 2^{b'} + 2^{c'} + \\sum_{i=c+1}^k a_i2^{i - a})2^a; c'= c-a;b'=b-a$</p>\n\n<p>So $m^2 = [(1 + 2^{b'} + 2^{c'} + 2*2^{b'}2^{c'}+ 2^{2b'} + 2^{c'}) + 2(1 + 2^{b'} + 2^{c'})\\sum_{i=c+1}^k a_i2^{i - a} + (\\sum_{i=c+1}^k a_i2^{i - a})^2]2^{2a}$.</p>\n\n<p>$= 2^a + 2^b + 2^c + 2^{1+b+c} +2^{2b} + 2^{2c} +..... $.</p>\n\n<p>Note all the later terms, if they exist, are larger than $2^{2c}$ so $m^2$ has at least four non-zero terms in its binary expansion.</p>\n\n<p>Okay...</p>\n\n<p>So $2^{2017} + 2^{2014} + 2^n = m^2$ has at most three non-zero terms in its expansion. So $m$ has at most $2$ it its expansion.</p>\n\n<p>So $m = 2^k$ or $m = 2^k + 2^j$.</p>\n\n<p>$2^{2017} + 2^{2014} + 2^n = 2^{2k}$ is impossible.</p>\n\n<p>And $2^{2017} + 2^{2014} + 2^n = (2^k + 2^j)^2 = 2^{2k} + 2*2^j*2^k + 2^{2k}$ has only $n = 2018$ for solution.</p>\n\n<p>So $n = 2018$ is the only solution.</p>\n" }, { "answer_id": 2105455, "score": 1, "text": "<p>The simple, fast, paper and pencil approach to find <strong>an</strong> answer (not <em>all</em> answers) is thus:</p>\n\n<p>$$2^{2017} + 2^{2014} + 2^n$$ Factor: $$2^{2014}(2^{3} + 1 + 2^{n-2014})$$</p>\n\n<p>If you factor out a perfect square from any number, the result is a perfect square if and only if the original number was a perfect square. 
Therefore we can just ignore the factor $2^{2014}$ as it is a perfect square, and seek to find an $n$ such that the following is a perfect square: $$2^{3} + 1 + 2^{n-2014}$$ Simplified: $$9+2^{n-2014}$$</p>\n\n<p>So the simple question is, what power of two added to $9$ results in a perfect square?</p>\n\n<p>The smallest answer to this modified question* is easily found by trial and error, or by remembering the $3,4,5$ pythagorean triple, or any number of other ways:</p>\n\n<p>$$9 + 16 = 25 \\Rightarrow 9+2^4=25$$</p>\n\n<p>Thus, we can get a possible value for $n$ using: $$n-2014=4$$</p>\n\n<p>So one possible value for $n$ is $4+2014=2018$.</p>\n\n<hr>\n\n<p><sub>*This allows us to find the smallest possible value of $n$ that is <em>greater than</em> $2014$. There may be a smaller answer.</sub></p>\n" }, { "answer_id": 4560253, "score": 0, "text": "<p>We are going to solve a more general question: For any given natural number <span class=\"math-container\">$k$</span>, find the integer <span class=\"math-container\">$n$</span> satisfying that\n<span class=\"math-container\">$$\n2^{2k}+2^{2k+3}+2^n=m^2,\n$$</span>\nwhere <span class=\"math-container\">$m$</span> is a natural number.\nSimplifying the equation to <span class=\"math-container\">$$9\\cdot 2^{2k}+2^n=m^2 \\Leftrightarrow 2^n=(m+p)(m-p), $$</span>\nwhere <span class=\"math-container\">$p=3\\cdot 2^{k}.$</span></p>\n<p>Splitting the power of <span class=\"math-container\">$2$</span> yields\n<span class=\"math-container\">$$\n\\left\\{\\begin{array}{l}\nm+p=2^s \\cdots (1)\\\\\nm-p=2^{n-s} \\cdots (2)\n\\end{array}\\right.\n$$</span>\nfor some integer <span class=\"math-container\">$s&gt;n-s.$</span></p>\n<p><span class=\"math-container\">$(1)-(2)$</span> gives <span class=\"math-container\">$$\n3 \\cdot 2^{k}=p=2^{n-s-1}\\left(2^{2s-n}-1\\right) \n$$</span>\n<span class=\"math-container\">$$\n\\Rightarrow \\left\\{\\begin{array}{l}\nn-s-1=k \\\\\n2 s-n=2\n\\end{array}\\right. 
\\Rightarrow \\boxed{n=2k+4},\n$$</span></p>\n<p>which is the <strong>unique</strong> solution of the equation,</p>\n<p>Back to our equation, when <span class=\"math-container\">$2k=2014, n=2018$</span> is its <strong>unique</strong> solution.</p>\n" }, { "answer_id": 2105236, "score": 11, "text": "<p>$(2^x+2^y)^2=2^{2x}+2^{x+y+1}+2^{2y}$ so we can take $y=1007$ and $x=1009$ to conclude $2^{2018}+2^{2017}+2^{2014}$ is a square.</p>\n\n<hr>\n\n<p>how to obtain all $n$:</p>\n\n<p>$9(2^{2014})=2^{2017}+2^{2014}=k^2-2^n$.</p>\n\n<p>Notice that we must have $k^2\\equiv 1 \\bmod 3$ implying $2^n\\equiv 1\\bmod 3$.</p>\n\n<p>So $n=2m$.</p>\n\n<p>From here we have $2^{2014}\\times 9= (k+2^m)(k-2^m)=z(z+2^{m+1})$</p>\n\n<p>there are now two cases, case one is when $z=9(2^j)$ and case two is when $z=2^j$</p>\n\n<p>the first case is impossible because the number $9$ in binary is $1001$ and adding a power of two clearly wont make it a power of $2$. The other case is clearly only solved by taking $z=2^{1007}$ and taking $m=1009$ thus getting $z(z+2^{m+1})=2̣^{1007}(2^{1007}+2^{1010})=9(2^{2014})$.</p>\n\n<p>So we need $n=2(1009)=2018$</p>\n" }, { "answer_id": 2105237, "score": 9, "text": "<p>The square is $$(2^{1007}+2^{1009})^2=2^{2014}+2^{2017}+2^{2018}$$</p>\n\n<p>The answe is therefore $n=2018$</p>\n" }, { "answer_id": 2106664, "score": 2, "text": "<p>If you want <em>an</em> answer then $(a+b)^2 = a^2 + 2ab + b^2$ should suggest an obvious answer by setting $a^2 = 2^{2014}; 2ab=2^{2017}; 2^n = c^2$. </p>\n\n<p>i.e $(2^{1007} + 2^{\\frac n2})^2 = 2^{2014} + 2*2^{1007}*2^{\\frac n2} + 2^n= 2^{2014} + 2^{1 + 1007 + \\frac n2} + 2^n$. We just have to solve for $1 + 1007 + \\frac n2 = 2017$. So $n = 2018$.</p>\n\n<p>But is it the only solution?</p>\n\n<p>Bear with me.</p>\n\n<p>Let $m$ be any positive integer. Let $m = \\sum_{i=0}^k a_i2^i; a_i = \\{0|1\\}$ but its unique binary expansion. 
</p>\n\n<p>Claim: If $m$ has 3 or more non-zero terms in its binary expansion then $m^2$ has more than 3 non-zero terms in its binary expansion.</p>\n\n<p>Proof: Let $m = 2^a + 2^b + 2^c + \\sum_{i= c+1}^k a_i 2^i; a &lt; b &lt; c$. ($a$ may equal $0$. and $a_i; i &gt; c$ may all be $0$.) </p>\n\n<p>Then $m = (1 + 2^{b'} + 2^{c'} + \\sum_{i=c+1}^k a_i2^{i - a})2^a; c'= c-a;b'=b-a$</p>\n\n<p>So $m^2 = [(1 + 2^{b'} + 2^{c'} + 2*2^{b'}2^{c'}+ 2^{2b'} + 2^{c'}) + 2(1 + 2^{b'} + 2^{c'})\\sum_{i=c+1}^k a_i2^{i - a} + (\\sum_{i=c+1}^k a_i2^{i - a})^2]2^{2a}$.</p>\n\n<p>$= 2^a + 2^b + 2^c + 2^{1+b+c} +2^{2b} + 2^{2c} +..... $.</p>\n\n<p>Note all the later terms, if they exist, are larger than $2^{2c}$ so $m^2$ has at least four non-zero terms in its binary expansion.</p>\n\n<p>Okay...</p>\n\n<p>So $2^{2017} + 2^{2014} + 2^n = m^2$ has at most three non-zero terms in its expansion. So $m$ has at most $2$ it its expansion.</p>\n\n<p>So $m = 2^k$ or $m = 2^k + 2^j$.</p>\n\n<p>$2^{2017} + 2^{2014} + 2^n = 2^{2k}$ is impossible.</p>\n\n<p>And $2^{2017} + 2^{2014} + 2^n = (2^k + 2^j)^2 = 2^{2k} + 2*2^j*2^k + 2^{2k}$ has only $n = 2018$ for solution.</p>\n\n<p>So $n = 2018$ is the only solution.</p>\n" }, { "answer_id": 2105455, "score": 1, "text": "<p>The simple, fast, paper and pencil approach to find <strong>an</strong> answer (not <em>all</em> answers) is thus:</p>\n\n<p>$$2^{2017} + 2^{2014} + 2^n$$ Factor: $$2^{2014}(2^{3} + 1 + 2^{n-2014})$$</p>\n\n<p>If you factor out a perfect square from any number, the result is a perfect square if and only if the original number was a perfect square. 
Therefore we can just ignore the factor $2^{2014}$ as it is a perfect square, and seek to find an $n$ such that the following is a perfect square: $$2^{3} + 1 + 2^{n-2014}$$ Simplified: $$9+2^{n-2014}$$</p>\n\n<p>So the simple question is, what power of two added to $9$ results in a perfect square?</p>\n\n<p>The smallest answer to this modified question* is easily found by trial and error, or by remembering the $3,4,5$ pythagorean triple, or any number of other ways:</p>\n\n<p>$$9 + 16 = 25 \\Rightarrow 9+2^4=25$$</p>\n\n<p>Thus, we can get a possible value for $n$ using: $$n-2014=4$$</p>\n\n<p>So one possible value for $n$ is $4+2014=2018$.</p>\n\n<hr>\n\n<p><sub>*This allows us to find the smallest possible value of $n$ that is <em>greater than</em> $2014$. There may be a smaller answer.</sub></p>\n" }, { "answer_id": 4560253, "score": 0, "text": "<p>We are going to solve a more general question: For any given natural number <span class=\"math-container\">$k$</span>, find the integer <span class=\"math-container\">$n$</span> satisfying that\n<span class=\"math-container\">$$\n2^{2k}+2^{2k+3}+2^n=m^2,\n$$</span>\nwhere <span class=\"math-container\">$m$</span> is a natural number.\nSimplifying the equation to <span class=\"math-container\">$$9\\cdot 2^{2k}+2^n=m^2 \\Leftrightarrow 2^n=(m+p)(m-p), $$</span>\nwhere <span class=\"math-container\">$p=3\\cdot 2^{k}.$</span></p>\n<p>Splitting the power of <span class=\"math-container\">$2$</span> yields\n<span class=\"math-container\">$$\n\\left\\{\\begin{array}{l}\nm+p=2^s \\cdots (1)\\\\\nm-p=2^{n-s} \\cdots (2)\n\\end{array}\\right.\n$$</span>\nfor some integer <span class=\"math-container\">$s&gt;n-s.$</span></p>\n<p><span class=\"math-container\">$(1)-(2)$</span> gives <span class=\"math-container\">$$\n3 \\cdot 2^{k}=p=2^{n-s-1}\\left(2^{2s-n}-1\\right) \n$$</span>\n<span class=\"math-container\">$$\n\\Rightarrow \\left\\{\\begin{array}{l}\nn-s-1=k \\\\\n2 s-n=2\n\\end{array}\\right. 
\\Rightarrow \\boxed{n=2k+4},\n$$</span></p>\n<p>which is the <strong>unique</strong> solution of the equation,</p>\n<p>Back to our equation, when <span class=\"math-container\">$2k=2014, n=2018$</span> is its <strong>unique</strong> solution.</p>\n" }, { "answer_id": 2105236, "score": 11, "text": "<p>$(2^x+2^y)^2=2^{2x}+2^{x+y+1}+2^{2y}$ so we can take $y=1007$ and $x=1009$ to conclude $2^{2018}+2^{2017}+2^{2014}$ is a square.</p>\n\n<hr>\n\n<p>how to obtain all $n$:</p>\n\n<p>$9(2^{2014})=2^{2017}+2^{2014}=k^2-2^n$.</p>\n\n<p>Notice that we must have $k^2\\equiv 1 \\bmod 3$ implying $2^n\\equiv 1\\bmod 3$.</p>\n\n<p>So $n=2m$.</p>\n\n<p>From here we have $2^{2014}\\times 9= (k+2^m)(k-2^m)=z(z+2^{m+1})$</p>\n\n<p>there are now two cases, case one is when $z=9(2^j)$ and case two is when $z=2^j$</p>\n\n<p>the first case is impossible because the number $9$ in binary is $1001$ and adding a power of two clearly wont make it a power of $2$. The other case is clearly only solved by taking $z=2^{1007}$ and taking $m=1009$ thus getting $z(z+2^{m+1})=2̣^{1007}(2^{1007}+2^{1010})=9(2^{2014})$.</p>\n\n<p>So we need $n=2(1009)=2018$</p>\n" }, { "answer_id": 2105237, "score": 9, "text": "<p>The square is $$(2^{1007}+2^{1009})^2=2^{2014}+2^{2017}+2^{2018}$$</p>\n\n<p>The answe is therefore $n=2018$</p>\n" }, { "answer_id": 2106664, "score": 2, "text": "<p>If you want <em>an</em> answer then $(a+b)^2 = a^2 + 2ab + b^2$ should suggest an obvious answer by setting $a^2 = 2^{2014}; 2ab=2^{2017}; 2^n = c^2$. </p>\n\n<p>i.e $(2^{1007} + 2^{\\frac n2})^2 = 2^{2014} + 2*2^{1007}*2^{\\frac n2} + 2^n= 2^{2014} + 2^{1 + 1007 + \\frac n2} + 2^n$. We just have to solve for $1 + 1007 + \\frac n2 = 2017$. So $n = 2018$.</p>\n\n<p>But is it the only solution?</p>\n\n<p>Bear with me.</p>\n\n<p>Let $m$ be any positive integer. Let $m = \\sum_{i=0}^k a_i2^i; a_i = \\{0|1\\}$ but its unique binary expansion. 
</p>\n\n<p>Claim: If $m$ has 3 or more non-zero terms in its binary expansion then $m^2$ has more than 3 non-zero terms in its binary expansion.</p>\n\n<p>Proof: Let $m = 2^a + 2^b + 2^c + \\sum_{i= c+1}^k a_i 2^i; a &lt; b &lt; c$. ($a$ may equal $0$. and $a_i; i &gt; c$ may all be $0$.) </p>\n\n<p>Then $m = (1 + 2^{b'} + 2^{c'} + \\sum_{i=c+1}^k a_i2^{i - a})2^a; c'= c-a;b'=b-a$</p>\n\n<p>So $m^2 = [(1 + 2^{b'} + 2^{c'} + 2*2^{b'}2^{c'}+ 2^{2b'} + 2^{c'}) + 2(1 + 2^{b'} + 2^{c'})\\sum_{i=c+1}^k a_i2^{i - a} + (\\sum_{i=c+1}^k a_i2^{i - a})^2]2^{2a}$.</p>\n\n<p>$= 2^a + 2^b + 2^c + 2^{1+b+c} +2^{2b} + 2^{2c} +..... $.</p>\n\n<p>Note all the later terms, if they exist, are larger than $2^{2c}$ so $m^2$ has at least four non-zero terms in its binary expansion.</p>\n\n<p>Okay...</p>\n\n<p>So $2^{2017} + 2^{2014} + 2^n = m^2$ has at most three non-zero terms in its expansion. So $m$ has at most $2$ it its expansion.</p>\n\n<p>So $m = 2^k$ or $m = 2^k + 2^j$.</p>\n\n<p>$2^{2017} + 2^{2014} + 2^n = 2^{2k}$ is impossible.</p>\n\n<p>And $2^{2017} + 2^{2014} + 2^n = (2^k + 2^j)^2 = 2^{2k} + 2*2^j*2^k + 2^{2k}$ has only $n = 2018$ for solution.</p>\n\n<p>So $n = 2018$ is the only solution.</p>\n" }, { "answer_id": 2105455, "score": 1, "text": "<p>The simple, fast, paper and pencil approach to find <strong>an</strong> answer (not <em>all</em> answers) is thus:</p>\n\n<p>$$2^{2017} + 2^{2014} + 2^n$$ Factor: $$2^{2014}(2^{3} + 1 + 2^{n-2014})$$</p>\n\n<p>If you factor out a perfect square from any number, the result is a perfect square if and only if the original number was a perfect square. 
Therefore we can just ignore the factor $2^{2014}$ as it is a perfect square, and seek to find an $n$ such that the following is a perfect square: $$2^{3} + 1 + 2^{n-2014}$$ Simplified: $$9+2^{n-2014}$$</p>\n\n<p>So the simple question is, what power of two added to $9$ results in a perfect square?</p>\n\n<p>The smallest answer to this modified question* is easily found by trial and error, or by remembering the $3,4,5$ pythagorean triple, or any number of other ways:</p>\n\n<p>$$9 + 16 = 25 \\Rightarrow 9+2^4=25$$</p>\n\n<p>Thus, we can get a possible value for $n$ using: $$n-2014=4$$</p>\n\n<p>So one possible value for $n$ is $4+2014=2018$.</p>\n\n<hr>\n\n<p><sub>*This allows us to find the smallest possible value of $n$ that is <em>greater than</em> $2014$. There may be a smaller answer.</sub></p>\n" }, { "answer_id": 4560253, "score": 0, "text": "<p>We are going to solve a more general question: For any given natural number <span class=\"math-container\">$k$</span>, find the integer <span class=\"math-container\">$n$</span> satisfying that\n<span class=\"math-container\">$$\n2^{2k}+2^{2k+3}+2^n=m^2,\n$$</span>\nwhere <span class=\"math-container\">$m$</span> is a natural number.\nSimplifying the equation to <span class=\"math-container\">$$9\\cdot 2^{2k}+2^n=m^2 \\Leftrightarrow 2^n=(m+p)(m-p), $$</span>\nwhere <span class=\"math-container\">$p=3\\cdot 2^{k}.$</span></p>\n<p>Splitting the power of <span class=\"math-container\">$2$</span> yields\n<span class=\"math-container\">$$\n\\left\\{\\begin{array}{l}\nm+p=2^s \\cdots (1)\\\\\nm-p=2^{n-s} \\cdots (2)\n\\end{array}\\right.\n$$</span>\nfor some integer <span class=\"math-container\">$s&gt;n-s.$</span></p>\n<p><span class=\"math-container\">$(1)-(2)$</span> gives <span class=\"math-container\">$$\n3 \\cdot 2^{k}=p=2^{n-s-1}\\left(2^{2s-n}-1\\right) \n$$</span>\n<span class=\"math-container\">$$\n\\Rightarrow \\left\\{\\begin{array}{l}\nn-s-1=k \\\\\n2 s-n=2\n\\end{array}\\right. 
\\Rightarrow \\boxed{n=2k+4},\n$$</span></p>\n<p>which is the <strong>unique</strong> solution of the equation,</p>\n<p>Back to our equation, when <span class=\"math-container\">$2k=2014, n=2018$</span> is its <strong>unique</strong> solution.</p>\n" } ]
Question: If $2^{2017} + 2^{2014} + 2^n$ is a perfect square, find $n$. Answer 1: <p>$(2^x+2^y)^2=2^{2x}+2^{x+y+1}+2^{2y}$ so we can take $y=1007$ and $x=1009$ to conclude $2^{2018}+2^{2017}+2^{2014}$ is a square.</p> <hr> <p>How to obtain all $n$:</p> <p>$9(2^{2014})=2^{2017}+2^{2014}=k^2-2^n$.</p> <p>Notice that we must have $k^2\equiv 1 \bmod 3$ implying $2^n\equiv 1\bmod 3$.</p> <p>So $n=2m$.</p> <p>From here we have $2^{2014}\times 9= (k+2^m)(k-2^m)=z(z+2^{m+1})$</p> <p>There are now two cases: case one is when $z=9(2^j)$ and case two is when $z=2^j$.</p> <p>The first case is impossible because the number $9$ in binary is $1001$ and adding a power of two clearly won't make it a power of $2$. The other case is clearly only solved by taking $z=2^{1007}$ and taking $m=1009$, thus getting $z(z+2^{m+1})=2^{1007}(2^{1007}+2^{1010})=9(2^{2014})$.</p> <p>So we need $n=2(1009)=2018$</p> Answer 2: <p>The square is $$(2^{1007}+2^{1009})^2=2^{2014}+2^{2017}+2^{2018}$$</p> <p>The answer is therefore $n=2018$</p> Answer 3: <p>If you want <em>an</em> answer then $(a+b)^2 = a^2 + 2ab + b^2$ should suggest an obvious answer by setting $a^2 = 2^{2014}; 2ab=2^{2017}; 2^n = c^2$. </p> <p>i.e. $(2^{1007} + 2^{\frac n2})^2 = 2^{2014} + 2*2^{1007}*2^{\frac n2} + 2^n= 2^{2014} + 2^{1 + 1007 + \frac n2} + 2^n$. We just have to solve for $1 + 1007 + \frac n2 = 2017$. So $n = 2018$.</p> <p>But is it the only solution?</p> <p>Bear with me.</p> <p>Let $m$ be any positive integer. Let $m = \sum_{i=0}^k a_i2^i; a_i \in \{0,1\}$ be its unique binary expansion. </p> <p>Claim: If $m$ has 3 or more non-zero terms in its binary expansion then $m^2$ has more than 3 non-zero terms in its binary expansion.</p> <p>Proof: Let $m = 2^a + 2^b + 2^c + \sum_{i= c+1}^k a_i 2^i; a &lt; b &lt; c$. ($a$ may equal $0$, and $a_i; i &gt; c$ may all be $0$.)
</p> <p>Then $m = (1 + 2^{b'} + 2^{c'} + \sum_{i=c+1}^k a_i2^{i - a})2^a$, where $c'= c-a$ and $b'=b-a$.</p> <p>So $m^2 = [(1 + 2^{1+b'} + 2^{1+c'} + 2^{1+b'+c'} + 2^{2b'} + 2^{2c'}) + 2(1 + 2^{b'} + 2^{c'})\sum_{i=c+1}^k a_i2^{i - a} + (\sum_{i=c+1}^k a_i2^{i - a})^2]2^{2a}$</p> <p>$= 2^{2a} + 2^{1+a+b} + 2^{1+a+c} + 2^{1+b+c} + 2^{2b} + 2^{2c} + \dots$</p> <p>Note all the later terms, if they exist, are larger than $2^{2c}$, so $m^2$ has at least four non-zero terms in its binary expansion.</p> <p>Okay...</p> <p>So $2^{2017} + 2^{2014} + 2^n = m^2$ has at most three non-zero terms in its expansion. So $m$ has at most $2$ in its expansion.</p> <p>So $m = 2^k$ or $m = 2^k + 2^j$.</p> <p>$2^{2017} + 2^{2014} + 2^n = 2^{2k}$ is impossible.</p> <p>And $2^{2017} + 2^{2014} + 2^n = (2^k + 2^j)^2 = 2^{2k} + 2*2^j*2^k + 2^{2j}$ has only $n = 2018$ as a solution.</p> <p>So $n = 2018$ is the only solution.</p> Answer 4: <p>The simple, fast, paper-and-pencil approach to find <strong>an</strong> answer (not <em>all</em> answers) is thus:</p> <p>$$2^{2017} + 2^{2014} + 2^n$$ Factor: $$2^{2014}(2^{3} + 1 + 2^{n-2014})$$</p> <p>If you factor out a perfect square from any number, the result is a perfect square if and only if the original number was a perfect square. Therefore we can just ignore the factor $2^{2014}$, as it is a perfect square, and seek to find an $n$ such that the following is a perfect square: $$2^{3} + 1 + 2^{n-2014}$$ Simplified: $$9+2^{n-2014}$$</p> <p>So the simple question is: what power of two added to $9$ results in a perfect square?</p> <p>The smallest answer to this modified question* is easily found by trial and error, or by remembering the $3,4,5$ Pythagorean triple, or any number of other ways:</p> <p>$$9 + 16 = 25 \Rightarrow 9+2^4=25$$</p> <p>Thus, we can get a possible value for $n$ using: $$n-2014=4$$</p> <p>So one possible value for $n$ is $4+2014=2018$.</p> <hr> <p><sub>*This allows us to find the smallest possible value of $n$ that is <em>greater than</em> $2014$.
There may be a smaller answer.</sub></p> Answer 5: <p>We are going to solve a more general question: For any given natural number <span class="math-container">$k$</span>, find the integer <span class="math-container">$n$</span> satisfying <span class="math-container">$$ 2^{2k}+2^{2k+3}+2^n=m^2, $$</span> where <span class="math-container">$m$</span> is a natural number.
Simplifying the equation to <span class="math-container">$$9\cdot 2^{2k}+2^n=m^2 \Leftrightarrow 2^n=(m+p)(m-p), $$</span> where <span class="math-container">$p=3\cdot 2^{k}.$</span></p> <p>Splitting the power of <span class="math-container">$2$</span> yields <span class="math-container">$$ \left\{\begin{array}{l} m+p=2^s \cdots (1)\\ m-p=2^{n-s} \cdots (2) \end{array}\right. $$</span> for some integer <span class="math-container">$s&gt;n-s.$</span></p> <p><span class="math-container">$(1)-(2)$</span> gives <span class="math-container">$$ 3 \cdot 2^{k}=p=2^{n-s-1}\left(2^{2s-n}-1\right) $$</span> <span class="math-container">$$ \Rightarrow \left\{\begin{array}{l} n-s-1=k \\ 2 s-n=2 \end{array}\right. \Rightarrow \boxed{n=2k+4}, $$</span></p> <p>which is the <strong>unique</strong> solution of the equation.</p> <p>Back to our equation, when <span class="math-container">$2k=2014, n=2018$</span> is its <strong>unique</strong> solution.</p>
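None of the answers above include a machine check. As an illustrative sanity check (the helper name below is ours, not taken from any answer), a short Python snippet confirms the square identity exhibited in Answers 1 and 2, and brute-forces a finite window of exponents, which should turn up only $n=2018$:

```python
from math import isqrt

def is_perfect_square(x: int) -> bool:
    # Exact test via integer square root; avoids float error on huge ints.
    r = isqrt(x)
    return r * r == x

# The square exhibited in Answers 1 and 2: (2^1007 + 2^1009)^2.
assert (2**1007 + 2**1009) ** 2 == 2**2014 + 2**2017 + 2**2018

# Brute-force a finite window of exponents; the uniqueness arguments in
# Answers 3 and 5 say only n = 2018 should appear.
solutions = [n for n in range(1, 4000)
             if is_perfect_square(2**2017 + 2**2014 + 2**n)]
print(solutions)  # expected: [2018]
```

The window $1 \le n &lt; 4000$ comfortably covers every case treated in the answers; uniqueness over all $n$ rests on the arguments given above, not on the search.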
The other case is clearly only solved by taking $z=2^{1007}$ and taking $m=1009$ thus getting $z(z+2^{m+1})=2̣^{1007}(2^{1007}+2^{1010})=9(2^{2014})$.</p> <p>So we need $n=2(1009)=2018$</p> Answer 12: <p>The square is $$(2^{1007}+2^{1009})^2=2^{2014}+2^{2017}+2^{2018}$$</p> <p>The answe is therefore $n=2018$</p> Answer 13: <p>If you want <em>an</em> answer then $(a+b)^2 = a^2 + 2ab + b^2$ should suggest an obvious answer by setting $a^2 = 2^{2014}; 2ab=2^{2017}; 2^n = c^2$. </p> <p>i.e $(2^{1007} + 2^{\frac n2})^2 = 2^{2014} + 2*2^{1007}*2^{\frac n2} + 2^n= 2^{2014} + 2^{1 + 1007 + \frac n2} + 2^n$. We just have to solve for $1 + 1007 + \frac n2 = 2017$. So $n = 2018$.</p> <p>But is it the only solution?</p> <p>Bear with me.</p> <p>Let $m$ be any positive integer. Let $m = \sum_{i=0}^k a_i2^i; a_i = \{0|1\}$ but its unique binary expansion. </p> <p>Claim: If $m$ has 3 or more non-zero terms in its binary expansion then $m^2$ has more than 3 non-zero terms in its binary expansion.</p> <p>Proof: Let $m = 2^a + 2^b + 2^c + \sum_{i= c+1}^k a_i 2^i; a &lt; b &lt; c$. ($a$ may equal $0$. and $a_i; i &gt; c$ may all be $0$.) </p> <p>Then $m = (1 + 2^{b'} + 2^{c'} + \sum_{i=c+1}^k a_i2^{i - a})2^a; c'= c-a;b'=b-a$</p> <p>So $m^2 = [(1 + 2^{b'} + 2^{c'} + 2*2^{b'}2^{c'}+ 2^{2b'} + 2^{c'}) + 2(1 + 2^{b'} + 2^{c'})\sum_{i=c+1}^k a_i2^{i - a} + (\sum_{i=c+1}^k a_i2^{i - a})^2]2^{2a}$.</p> <p>$= 2^a + 2^b + 2^c + 2^{1+b+c} +2^{2b} + 2^{2c} +..... $.</p> <p>Note all the later terms, if they exist, are larger than $2^{2c}$ so $m^2$ has at least four non-zero terms in its binary expansion.</p> <p>Okay...</p> <p>So $2^{2017} + 2^{2014} + 2^n = m^2$ has at most three non-zero terms in its expansion. 
So $m$ has at most $2$ it its expansion.</p> <p>So $m = 2^k$ or $m = 2^k + 2^j$.</p> <p>$2^{2017} + 2^{2014} + 2^n = 2^{2k}$ is impossible.</p> <p>And $2^{2017} + 2^{2014} + 2^n = (2^k + 2^j)^2 = 2^{2k} + 2*2^j*2^k + 2^{2k}$ has only $n = 2018$ for solution.</p> <p>So $n = 2018$ is the only solution.</p> Answer 14: <p>The simple, fast, paper and pencil approach to find <strong>an</strong> answer (not <em>all</em> answers) is thus:</p> <p>$$2^{2017} + 2^{2014} + 2^n$$ Factor: $$2^{2014}(2^{3} + 1 + 2^{n-2014})$$</p> <p>If you factor out a perfect square from any number, the result is a perfect square if and only if the original number was a perfect square. Therefore we can just ignore the factor $2^{2014}$ as it is a perfect square, and seek to find an $n$ such that the following is a perfect square: $$2^{3} + 1 + 2^{n-2014}$$ Simplified: $$9+2^{n-2014}$$</p> <p>So the simple question is, what power of two added to $9$ results in a perfect square?</p> <p>The smallest answer to this modified question* is easily found by trial and error, or by remembering the $3,4,5$ pythagorean triple, or any number of other ways:</p> <p>$$9 + 16 = 25 \Rightarrow 9+2^4=25$$</p> <p>Thus, we can get a possible value for $n$ using: $$n-2014=4$$</p> <p>So one possible value for $n$ is $4+2014=2018$.</p> <hr> <p><sub>*This allows us to find the smallest possible value of $n$ that is <em>greater than</em> $2014$. There may be a smaller answer.</sub></p> Answer 15: <p>We are going to solve a more general question: For any given natural number <span class="math-container">$k$</span>, find the integer <span class="math-container">$n$</span> satisfying that <span class="math-container">$$ 2^{2k}+2^{2k+3}+2^n=m^2, $$</span> where <span class="math-container">$m$</span> is a natural number. 
Simplifying the equation to <span class="math-container">$$9\cdot 2^{2k}+2^n=m^2 \Leftrightarrow 2^n=(m+p)(m-p), $$</span> where <span class="math-container">$p=3\cdot 2^{k}.$</span></p> <p>Splitting the power of <span class="math-container">$2$</span> yields <span class="math-container">$$ \left\{\begin{array}{l} m+p=2^s \cdots (1)\\ m-p=2^{n-s} \cdots (2) \end{array}\right. $$</span> for some integer <span class="math-container">$s&gt;n-s.$</span></p> <p><span class="math-container">$(1)-(2)$</span> gives <span class="math-container">$$ 3 \cdot 2^{k}=p=2^{n-s-1}\left(2^{2s-n}-1\right) $$</span> <span class="math-container">$$ \Rightarrow \left\{\begin{array}{l} n-s-1=k \\ 2 s-n=2 \end{array}\right. \Rightarrow \boxed{n=2k+4}, $$</span></p> <p>which is the <strong>unique</strong> solution of the equation,</p> <p>Back to our equation, when <span class="math-container">$2k=2014, n=2018$</span> is its <strong>unique</strong> solution.</p> Answer 16: <p>$(2^x+2^y)^2=2^{2x}+2^{x+y+1}+2^{2y}$ so we can take $y=1007$ and $x=1009$ to conclude $2^{2018}+2^{2017}+2^{2014}$ is a square.</p> <hr> <p>how to obtain all $n$:</p> <p>$9(2^{2014})=2^{2017}+2^{2014}=k^2-2^n$.</p> <p>Notice that we must have $k^2\equiv 1 \bmod 3$ implying $2^n\equiv 1\bmod 3$.</p> <p>So $n=2m$.</p> <p>From here we have $2^{2014}\times 9= (k+2^m)(k-2^m)=z(z+2^{m+1})$</p> <p>there are now two cases, case one is when $z=9(2^j)$ and case two is when $z=2^j$</p> <p>the first case is impossible because the number $9$ in binary is $1001$ and adding a power of two clearly wont make it a power of $2$. 
The other case is clearly only solved by taking $z=2^{1007}$ and taking $m=1009$ thus getting $z(z+2^{m+1})=2̣^{1007}(2^{1007}+2^{1010})=9(2^{2014})$.</p> <p>So we need $n=2(1009)=2018$</p> Answer 17: <p>The square is $$(2^{1007}+2^{1009})^2=2^{2014}+2^{2017}+2^{2018}$$</p> <p>The answe is therefore $n=2018$</p> Answer 18: <p>If you want <em>an</em> answer then $(a+b)^2 = a^2 + 2ab + b^2$ should suggest an obvious answer by setting $a^2 = 2^{2014}; 2ab=2^{2017}; 2^n = c^2$. </p> <p>i.e $(2^{1007} + 2^{\frac n2})^2 = 2^{2014} + 2*2^{1007}*2^{\frac n2} + 2^n= 2^{2014} + 2^{1 + 1007 + \frac n2} + 2^n$. We just have to solve for $1 + 1007 + \frac n2 = 2017$. So $n = 2018$.</p> <p>But is it the only solution?</p> <p>Bear with me.</p> <p>Let $m$ be any positive integer. Let $m = \sum_{i=0}^k a_i2^i; a_i = \{0|1\}$ but its unique binary expansion. </p> <p>Claim: If $m$ has 3 or more non-zero terms in its binary expansion then $m^2$ has more than 3 non-zero terms in its binary expansion.</p> <p>Proof: Let $m = 2^a + 2^b + 2^c + \sum_{i= c+1}^k a_i 2^i; a &lt; b &lt; c$. ($a$ may equal $0$. and $a_i; i &gt; c$ may all be $0$.) </p> <p>Then $m = (1 + 2^{b'} + 2^{c'} + \sum_{i=c+1}^k a_i2^{i - a})2^a; c'= c-a;b'=b-a$</p> <p>So $m^2 = [(1 + 2^{b'} + 2^{c'} + 2*2^{b'}2^{c'}+ 2^{2b'} + 2^{c'}) + 2(1 + 2^{b'} + 2^{c'})\sum_{i=c+1}^k a_i2^{i - a} + (\sum_{i=c+1}^k a_i2^{i - a})^2]2^{2a}$.</p> <p>$= 2^a + 2^b + 2^c + 2^{1+b+c} +2^{2b} + 2^{2c} +..... $.</p> <p>Note all the later terms, if they exist, are larger than $2^{2c}$ so $m^2$ has at least four non-zero terms in its binary expansion.</p> <p>Okay...</p> <p>So $2^{2017} + 2^{2014} + 2^n = m^2$ has at most three non-zero terms in its expansion. 
So $m$ has at most $2$ it its expansion.</p> <p>So $m = 2^k$ or $m = 2^k + 2^j$.</p> <p>$2^{2017} + 2^{2014} + 2^n = 2^{2k}$ is impossible.</p> <p>And $2^{2017} + 2^{2014} + 2^n = (2^k + 2^j)^2 = 2^{2k} + 2*2^j*2^k + 2^{2k}$ has only $n = 2018$ for solution.</p> <p>So $n = 2018$ is the only solution.</p> Answer 19: <p>The simple, fast, paper and pencil approach to find <strong>an</strong> answer (not <em>all</em> answers) is thus:</p> <p>$$2^{2017} + 2^{2014} + 2^n$$ Factor: $$2^{2014}(2^{3} + 1 + 2^{n-2014})$$</p> <p>If you factor out a perfect square from any number, the result is a perfect square if and only if the original number was a perfect square. Therefore we can just ignore the factor $2^{2014}$ as it is a perfect square, and seek to find an $n$ such that the following is a perfect square: $$2^{3} + 1 + 2^{n-2014}$$ Simplified: $$9+2^{n-2014}$$</p> <p>So the simple question is, what power of two added to $9$ results in a perfect square?</p> <p>The smallest answer to this modified question* is easily found by trial and error, or by remembering the $3,4,5$ pythagorean triple, or any number of other ways:</p> <p>$$9 + 16 = 25 \Rightarrow 9+2^4=25$$</p> <p>Thus, we can get a possible value for $n$ using: $$n-2014=4$$</p> <p>So one possible value for $n$ is $4+2014=2018$.</p> <hr> <p><sub>*This allows us to find the smallest possible value of $n$ that is <em>greater than</em> $2014$. There may be a smaller answer.</sub></p> Answer 20: <p>We are going to solve a more general question: For any given natural number <span class="math-container">$k$</span>, find the integer <span class="math-container">$n$</span> satisfying that <span class="math-container">$$ 2^{2k}+2^{2k+3}+2^n=m^2, $$</span> where <span class="math-container">$m$</span> is a natural number. 
Simplifying the equation to <span class="math-container">$$9\cdot 2^{2k}+2^n=m^2 \Leftrightarrow 2^n=(m+p)(m-p), $$</span> where <span class="math-container">$p=3\cdot 2^{k}.$</span></p> <p>Splitting the power of <span class="math-container">$2$</span> yields <span class="math-container">$$ \left\{\begin{array}{l} m+p=2^s \cdots (1)\\ m-p=2^{n-s} \cdots (2) \end{array}\right. $$</span> for some integer <span class="math-container">$s&gt;n-s.$</span></p> <p><span class="math-container">$(1)-(2)$</span> gives <span class="math-container">$$ 3 \cdot 2^{k}=p=2^{n-s-1}\left(2^{2s-n}-1\right) $$</span> <span class="math-container">$$ \Rightarrow \left\{\begin{array}{l} n-s-1=k \\ 2 s-n=2 \end{array}\right. \Rightarrow \boxed{n=2k+4}, $$</span></p> <p>which is the <strong>unique</strong> solution of the equation,</p> <p>Back to our equation, when <span class="math-container">$2k=2014, n=2018$</span> is its <strong>unique</strong> solution.</p> Answer 21: <p>$(2^x+2^y)^2=2^{2x}+2^{x+y+1}+2^{2y}$ so we can take $y=1007$ and $x=1009$ to conclude $2^{2018}+2^{2017}+2^{2014}$ is a square.</p> <hr> <p>how to obtain all $n$:</p> <p>$9(2^{2014})=2^{2017}+2^{2014}=k^2-2^n$.</p> <p>Notice that we must have $k^2\equiv 1 \bmod 3$ implying $2^n\equiv 1\bmod 3$.</p> <p>So $n=2m$.</p> <p>From here we have $2^{2014}\times 9= (k+2^m)(k-2^m)=z(z+2^{m+1})$</p> <p>there are now two cases, case one is when $z=9(2^j)$ and case two is when $z=2^j$</p> <p>the first case is impossible because the number $9$ in binary is $1001$ and adding a power of two clearly wont make it a power of $2$. 
The other case is clearly only solved by taking $z=2^{1007}$ and taking $m=1009$ thus getting $z(z+2^{m+1})=2̣^{1007}(2^{1007}+2^{1010})=9(2^{2014})$.</p> <p>So we need $n=2(1009)=2018$</p> Answer 22: <p>The square is $$(2^{1007}+2^{1009})^2=2^{2014}+2^{2017}+2^{2018}$$</p> <p>The answe is therefore $n=2018$</p> Answer 23: <p>If you want <em>an</em> answer then $(a+b)^2 = a^2 + 2ab + b^2$ should suggest an obvious answer by setting $a^2 = 2^{2014}; 2ab=2^{2017}; 2^n = c^2$. </p> <p>i.e $(2^{1007} + 2^{\frac n2})^2 = 2^{2014} + 2*2^{1007}*2^{\frac n2} + 2^n= 2^{2014} + 2^{1 + 1007 + \frac n2} + 2^n$. We just have to solve for $1 + 1007 + \frac n2 = 2017$. So $n = 2018$.</p> <p>But is it the only solution?</p> <p>Bear with me.</p> <p>Let $m$ be any positive integer. Let $m = \sum_{i=0}^k a_i2^i; a_i = \{0|1\}$ but its unique binary expansion. </p> <p>Claim: If $m$ has 3 or more non-zero terms in its binary expansion then $m^2$ has more than 3 non-zero terms in its binary expansion.</p> <p>Proof: Let $m = 2^a + 2^b + 2^c + \sum_{i= c+1}^k a_i 2^i; a &lt; b &lt; c$. ($a$ may equal $0$. and $a_i; i &gt; c$ may all be $0$.) </p> <p>Then $m = (1 + 2^{b'} + 2^{c'} + \sum_{i=c+1}^k a_i2^{i - a})2^a; c'= c-a;b'=b-a$</p> <p>So $m^2 = [(1 + 2^{b'} + 2^{c'} + 2*2^{b'}2^{c'}+ 2^{2b'} + 2^{c'}) + 2(1 + 2^{b'} + 2^{c'})\sum_{i=c+1}^k a_i2^{i - a} + (\sum_{i=c+1}^k a_i2^{i - a})^2]2^{2a}$.</p> <p>$= 2^a + 2^b + 2^c + 2^{1+b+c} +2^{2b} + 2^{2c} +..... $.</p> <p>Note all the later terms, if they exist, are larger than $2^{2c}$ so $m^2$ has at least four non-zero terms in its binary expansion.</p> <p>Okay...</p> <p>So $2^{2017} + 2^{2014} + 2^n = m^2$ has at most three non-zero terms in its expansion. 
So $m$ has at most $2$ it its expansion.</p> <p>So $m = 2^k$ or $m = 2^k + 2^j$.</p> <p>$2^{2017} + 2^{2014} + 2^n = 2^{2k}$ is impossible.</p> <p>And $2^{2017} + 2^{2014} + 2^n = (2^k + 2^j)^2 = 2^{2k} + 2*2^j*2^k + 2^{2k}$ has only $n = 2018$ for solution.</p> <p>So $n = 2018$ is the only solution.</p> Answer 24: <p>The simple, fast, paper and pencil approach to find <strong>an</strong> answer (not <em>all</em> answers) is thus:</p> <p>$$2^{2017} + 2^{2014} + 2^n$$ Factor: $$2^{2014}(2^{3} + 1 + 2^{n-2014})$$</p> <p>If you factor out a perfect square from any number, the result is a perfect square if and only if the original number was a perfect square. Therefore we can just ignore the factor $2^{2014}$ as it is a perfect square, and seek to find an $n$ such that the following is a perfect square: $$2^{3} + 1 + 2^{n-2014}$$ Simplified: $$9+2^{n-2014}$$</p> <p>So the simple question is, what power of two added to $9$ results in a perfect square?</p> <p>The smallest answer to this modified question* is easily found by trial and error, or by remembering the $3,4,5$ pythagorean triple, or any number of other ways:</p> <p>$$9 + 16 = 25 \Rightarrow 9+2^4=25$$</p> <p>Thus, we can get a possible value for $n$ using: $$n-2014=4$$</p> <p>So one possible value for $n$ is $4+2014=2018$.</p> <hr> <p><sub>*This allows us to find the smallest possible value of $n$ that is <em>greater than</em> $2014$. There may be a smaller answer.</sub></p> Answer 25: <p>We are going to solve a more general question: For any given natural number <span class="math-container">$k$</span>, find the integer <span class="math-container">$n$</span> satisfying that <span class="math-container">$$ 2^{2k}+2^{2k+3}+2^n=m^2, $$</span> where <span class="math-container">$m$</span> is a natural number. 
Simplifying the equation to <span class="math-container">$$9\cdot 2^{2k}+2^n=m^2 \Leftrightarrow 2^n=(m+p)(m-p), $$</span> where <span class="math-container">$p=3\cdot 2^{k}.$</span></p> <p>Splitting the power of <span class="math-container">$2$</span> yields <span class="math-container">$$ \left\{\begin{array}{l} m+p=2^s \cdots (1)\\ m-p=2^{n-s} \cdots (2) \end{array}\right. $$</span> for some integer <span class="math-container">$s&gt;n-s.$</span></p> <p><span class="math-container">$(1)-(2)$</span> gives <span class="math-container">$$ 3 \cdot 2^{k}=p=2^{n-s-1}\left(2^{2s-n}-1\right) $$</span> <span class="math-container">$$ \Rightarrow \left\{\begin{array}{l} n-s-1=k \\ 2 s-n=2 \end{array}\right. \Rightarrow \boxed{n=2k+4}, $$</span></p> <p>which is the <strong>unique</strong> solution of the equation,</p> <p>Back to our equation, when <span class="math-container">$2k=2014, n=2018$</span> is its <strong>unique</strong> solution.</p> Answer 26: <p>$(2^x+2^y)^2=2^{2x}+2^{x+y+1}+2^{2y}$ so we can take $y=1007$ and $x=1009$ to conclude $2^{2018}+2^{2017}+2^{2014}$ is a square.</p> <hr> <p>how to obtain all $n$:</p> <p>$9(2^{2014})=2^{2017}+2^{2014}=k^2-2^n$.</p> <p>Notice that we must have $k^2\equiv 1 \bmod 3$ implying $2^n\equiv 1\bmod 3$.</p> <p>So $n=2m$.</p> <p>From here we have $2^{2014}\times 9= (k+2^m)(k-2^m)=z(z+2^{m+1})$</p> <p>there are now two cases, case one is when $z=9(2^j)$ and case two is when $z=2^j$</p> <p>the first case is impossible because the number $9$ in binary is $1001$ and adding a power of two clearly wont make it a power of $2$. 
The other case is clearly only solved by taking $z=2^{1007}$ and taking $m=1009$ thus getting $z(z+2^{m+1})=2̣^{1007}(2^{1007}+2^{1010})=9(2^{2014})$.</p> <p>So we need $n=2(1009)=2018$</p> Answer 27: <p>The square is $$(2^{1007}+2^{1009})^2=2^{2014}+2^{2017}+2^{2018}$$</p> <p>The answe is therefore $n=2018$</p> Answer 28: <p>If you want <em>an</em> answer then $(a+b)^2 = a^2 + 2ab + b^2$ should suggest an obvious answer by setting $a^2 = 2^{2014}; 2ab=2^{2017}; 2^n = c^2$. </p> <p>i.e $(2^{1007} + 2^{\frac n2})^2 = 2^{2014} + 2*2^{1007}*2^{\frac n2} + 2^n= 2^{2014} + 2^{1 + 1007 + \frac n2} + 2^n$. We just have to solve for $1 + 1007 + \frac n2 = 2017$. So $n = 2018$.</p> <p>But is it the only solution?</p> <p>Bear with me.</p> <p>Let $m$ be any positive integer. Let $m = \sum_{i=0}^k a_i2^i; a_i = \{0|1\}$ but its unique binary expansion. </p> <p>Claim: If $m$ has 3 or more non-zero terms in its binary expansion then $m^2$ has more than 3 non-zero terms in its binary expansion.</p> <p>Proof: Let $m = 2^a + 2^b + 2^c + \sum_{i= c+1}^k a_i 2^i; a &lt; b &lt; c$. ($a$ may equal $0$. and $a_i; i &gt; c$ may all be $0$.) </p> <p>Then $m = (1 + 2^{b'} + 2^{c'} + \sum_{i=c+1}^k a_i2^{i - a})2^a; c'= c-a;b'=b-a$</p> <p>So $m^2 = [(1 + 2^{b'} + 2^{c'} + 2*2^{b'}2^{c'}+ 2^{2b'} + 2^{c'}) + 2(1 + 2^{b'} + 2^{c'})\sum_{i=c+1}^k a_i2^{i - a} + (\sum_{i=c+1}^k a_i2^{i - a})^2]2^{2a}$.</p> <p>$= 2^a + 2^b + 2^c + 2^{1+b+c} +2^{2b} + 2^{2c} +..... $.</p> <p>Note all the later terms, if they exist, are larger than $2^{2c}$ so $m^2$ has at least four non-zero terms in its binary expansion.</p> <p>Okay...</p> <p>So $2^{2017} + 2^{2014} + 2^n = m^2$ has at most three non-zero terms in its expansion. 
So $m$ has at most $2$ it its expansion.</p> <p>So $m = 2^k$ or $m = 2^k + 2^j$.</p> <p>$2^{2017} + 2^{2014} + 2^n = 2^{2k}$ is impossible.</p> <p>And $2^{2017} + 2^{2014} + 2^n = (2^k + 2^j)^2 = 2^{2k} + 2*2^j*2^k + 2^{2k}$ has only $n = 2018$ for solution.</p> <p>So $n = 2018$ is the only solution.</p> Answer 29: <p>The simple, fast, paper and pencil approach to find <strong>an</strong> answer (not <em>all</em> answers) is thus:</p> <p>$$2^{2017} + 2^{2014} + 2^n$$ Factor: $$2^{2014}(2^{3} + 1 + 2^{n-2014})$$</p> <p>If you factor out a perfect square from any number, the result is a perfect square if and only if the original number was a perfect square. Therefore we can just ignore the factor $2^{2014}$ as it is a perfect square, and seek to find an $n$ such that the following is a perfect square: $$2^{3} + 1 + 2^{n-2014}$$ Simplified: $$9+2^{n-2014}$$</p> <p>So the simple question is, what power of two added to $9$ results in a perfect square?</p> <p>The smallest answer to this modified question* is easily found by trial and error, or by remembering the $3,4,5$ pythagorean triple, or any number of other ways:</p> <p>$$9 + 16 = 25 \Rightarrow 9+2^4=25$$</p> <p>Thus, we can get a possible value for $n$ using: $$n-2014=4$$</p> <p>So one possible value for $n$ is $4+2014=2018$.</p> <hr> <p><sub>*This allows us to find the smallest possible value of $n$ that is <em>greater than</em> $2014$. There may be a smaller answer.</sub></p> Answer 30: <p>We are going to solve a more general question: For any given natural number <span class="math-container">$k$</span>, find the integer <span class="math-container">$n$</span> satisfying that <span class="math-container">$$ 2^{2k}+2^{2k+3}+2^n=m^2, $$</span> where <span class="math-container">$m$</span> is a natural number. 
Simplifying the equation to <span class="math-container">$$9\cdot 2^{2k}+2^n=m^2 \Leftrightarrow 2^n=(m+p)(m-p), $$</span> where <span class="math-container">$p=3\cdot 2^{k}.$</span></p> <p>Splitting the power of <span class="math-container">$2$</span> yields <span class="math-container">$$ \left\{\begin{array}{l} m+p=2^s \cdots (1)\\ m-p=2^{n-s} \cdots (2) \end{array}\right. $$</span> for some integer <span class="math-container">$s&gt;n-s.$</span></p> <p><span class="math-container">$(1)-(2)$</span> gives <span class="math-container">$$ 3 \cdot 2^{k}=p=2^{n-s-1}\left(2^{2s-n}-1\right) $$</span> <span class="math-container">$$ \Rightarrow \left\{\begin{array}{l} n-s-1=k \\ 2 s-n=2 \end{array}\right. \Rightarrow \boxed{n=2k+4}, $$</span></p> <p>which is the <strong>unique</strong> solution of the equation,</p> <p>Back to our equation, when <span class="math-container">$2k=2014, n=2018$</span> is its <strong>unique</strong> solution.</p> Answer 31: <p>$(2^x+2^y)^2=2^{2x}+2^{x+y+1}+2^{2y}$ so we can take $y=1007$ and $x=1009$ to conclude $2^{2018}+2^{2017}+2^{2014}$ is a square.</p> <hr> <p>how to obtain all $n$:</p> <p>$9(2^{2014})=2^{2017}+2^{2014}=k^2-2^n$.</p> <p>Notice that we must have $k^2\equiv 1 \bmod 3$ implying $2^n\equiv 1\bmod 3$.</p> <p>So $n=2m$.</p> <p>From here we have $2^{2014}\times 9= (k+2^m)(k-2^m)=z(z+2^{m+1})$</p> <p>there are now two cases, case one is when $z=9(2^j)$ and case two is when $z=2^j$</p> <p>the first case is impossible because the number $9$ in binary is $1001$ and adding a power of two clearly wont make it a power of $2$. 
The other case is clearly only solved by taking $z=2^{1007}$ and taking $m=1009$ thus getting $z(z+2^{m+1})=2̣^{1007}(2^{1007}+2^{1010})=9(2^{2014})$.</p> <p>So we need $n=2(1009)=2018$</p> Answer 32: <p>The square is $$(2^{1007}+2^{1009})^2=2^{2014}+2^{2017}+2^{2018}$$</p> <p>The answe is therefore $n=2018$</p> Answer 33: <p>If you want <em>an</em> answer then $(a+b)^2 = a^2 + 2ab + b^2$ should suggest an obvious answer by setting $a^2 = 2^{2014}; 2ab=2^{2017}; 2^n = c^2$. </p> <p>i.e $(2^{1007} + 2^{\frac n2})^2 = 2^{2014} + 2*2^{1007}*2^{\frac n2} + 2^n= 2^{2014} + 2^{1 + 1007 + \frac n2} + 2^n$. We just have to solve for $1 + 1007 + \frac n2 = 2017$. So $n = 2018$.</p> <p>But is it the only solution?</p> <p>Bear with me.</p> <p>Let $m$ be any positive integer. Let $m = \sum_{i=0}^k a_i2^i; a_i = \{0|1\}$ but its unique binary expansion. </p> <p>Claim: If $m$ has 3 or more non-zero terms in its binary expansion then $m^2$ has more than 3 non-zero terms in its binary expansion.</p> <p>Proof: Let $m = 2^a + 2^b + 2^c + \sum_{i= c+1}^k a_i 2^i; a &lt; b &lt; c$. ($a$ may equal $0$. and $a_i; i &gt; c$ may all be $0$.) </p> <p>Then $m = (1 + 2^{b'} + 2^{c'} + \sum_{i=c+1}^k a_i2^{i - a})2^a; c'= c-a;b'=b-a$</p> <p>So $m^2 = [(1 + 2^{b'} + 2^{c'} + 2*2^{b'}2^{c'}+ 2^{2b'} + 2^{c'}) + 2(1 + 2^{b'} + 2^{c'})\sum_{i=c+1}^k a_i2^{i - a} + (\sum_{i=c+1}^k a_i2^{i - a})^2]2^{2a}$.</p> <p>$= 2^a + 2^b + 2^c + 2^{1+b+c} +2^{2b} + 2^{2c} +..... $.</p> <p>Note all the later terms, if they exist, are larger than $2^{2c}$ so $m^2$ has at least four non-zero terms in its binary expansion.</p> <p>Okay...</p> <p>So $2^{2017} + 2^{2014} + 2^n = m^2$ has at most three non-zero terms in its expansion. 
So $m$ has at most $2$ it its expansion.</p> <p>So $m = 2^k$ or $m = 2^k + 2^j$.</p> <p>$2^{2017} + 2^{2014} + 2^n = 2^{2k}$ is impossible.</p> <p>And $2^{2017} + 2^{2014} + 2^n = (2^k + 2^j)^2 = 2^{2k} + 2*2^j*2^k + 2^{2k}$ has only $n = 2018$ for solution.</p> <p>So $n = 2018$ is the only solution.</p> Answer 34: <p>The simple, fast, paper and pencil approach to find <strong>an</strong> answer (not <em>all</em> answers) is thus:</p> <p>$$2^{2017} + 2^{2014} + 2^n$$ Factor: $$2^{2014}(2^{3} + 1 + 2^{n-2014})$$</p> <p>If you factor out a perfect square from any number, the result is a perfect square if and only if the original number was a perfect square. Therefore we can just ignore the factor $2^{2014}$ as it is a perfect square, and seek to find an $n$ such that the following is a perfect square: $$2^{3} + 1 + 2^{n-2014}$$ Simplified: $$9+2^{n-2014}$$</p> <p>So the simple question is, what power of two added to $9$ results in a perfect square?</p> <p>The smallest answer to this modified question* is easily found by trial and error, or by remembering the $3,4,5$ pythagorean triple, or any number of other ways:</p> <p>$$9 + 16 = 25 \Rightarrow 9+2^4=25$$</p> <p>Thus, we can get a possible value for $n$ using: $$n-2014=4$$</p> <p>So one possible value for $n$ is $4+2014=2018$.</p> <hr> <p><sub>*This allows us to find the smallest possible value of $n$ that is <em>greater than</em> $2014$. There may be a smaller answer.</sub></p> Answer 35: <p>We are going to solve a more general question: For any given natural number <span class="math-container">$k$</span>, find the integer <span class="math-container">$n$</span> satisfying that <span class="math-container">$$ 2^{2k}+2^{2k+3}+2^n=m^2, $$</span> where <span class="math-container">$m$</span> is a natural number. 
Simplifying the equation gives <span class="math-container">$$9\cdot 2^{2k}+2^n=m^2 \Leftrightarrow 2^n=(m+p)(m-p), $$</span> where <span class="math-container">$p=3\cdot 2^{k}.$</span></p> <p>Splitting the power of <span class="math-container">$2$</span> yields <span class="math-container">$$ \left\{\begin{array}{l} m+p=2^s \cdots (1)\\ m-p=2^{n-s} \cdots (2) \end{array}\right. $$</span> for some integer <span class="math-container">$s&gt;n-s.$</span></p> <p><span class="math-container">$(1)-(2)$</span> gives <span class="math-container">$$ 3 \cdot 2^{k}=p=2^{n-s-1}\left(2^{2s-n}-1\right) $$</span> <span class="math-container">$$ \Rightarrow \left\{\begin{array}{l} n-s-1=k \\ 2 s-n=2 \end{array}\right. \Rightarrow \boxed{n=2k+4}, $$</span></p> <p>which is the <strong>unique</strong> solution of the equation.</p> <p>Back to our equation: when <span class="math-container">$2k=2014$</span>, <span class="math-container">$n=2018$</span> is its <strong>unique</strong> solution.</p> Answer 36: <p>$(2^x+2^y)^2=2^{2x}+2^{x+y+1}+2^{2y}$ so we can take $y=1007$ and $x=1009$ to conclude $2^{2018}+2^{2017}+2^{2014}$ is a square.</p> <hr> <p>How to obtain all $n$:</p> <p>$9(2^{2014})=2^{2017}+2^{2014}=k^2-2^n$.</p> <p>Notice that we must have $k^2\equiv 1 \bmod 3$, implying $2^n\equiv 1\bmod 3$.</p> <p>So $n=2m$.</p> <p>From here we have $2^{2014}\times 9= (k+2^m)(k-2^m)=z(z+2^{m+1})$, where $z=k-2^m$.</p> <p>There are now two cases: case one is when $z=9(2^j)$ and case two is when $z=2^j$.</p> <p>The first case is impossible because the number $9$ in binary is $1001$ and adding a power of two clearly won't make it a power of $2$.
The other case is clearly only solved by taking $z=2^{1007}$ and $m=1009$, thus getting $z(z+2^{m+1})=2^{1007}(2^{1007}+2^{1010})=9(2^{2014})$.</p> <p>So we need $n=2(1009)=2018$</p>
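A quick numerical sanity check of the answers above (a hypothetical helper script, not part of any of the original posts; the search bound 4000 is an arbitrary choice): it confirms the binomial identity $(2^{1007}+2^{1009})^2 = 2^{2014}+2^{2017}+2^{2018}$ and that $n=2018$ is the only exponent in the searched window.

```python
# Brute-force verification sketch (standard library only).
from math import isqrt

def is_perfect_square(x: int) -> bool:
    r = isqrt(x)
    return r * r == x

# The identity several answers rely on, via (a + b)^2 = a^2 + 2ab + b^2:
assert (2**1007 + 2**1009) ** 2 == 2**2014 + 2**2017 + 2**2018

# Search every n in a window for which 2^2017 + 2^2014 + 2^n is a square.
base = 2**2017 + 2**2014
solutions = [n for n in range(1, 4000) if is_perfect_square(base + 2**n)]
print(solutions)  # -> [2018]
```

Python's arbitrary-precision integers and `math.isqrt` make the exact check on ~2000-bit numbers trivial, which is why no floating-point square roots appear here.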
Matrix diagonalization: what if the (basis) vectors are not orthogonal?
2,105,571
6
https://math.stackexchange.com/questions/2105571/matrix-diagonalization-what-if-the-basis-vectors-are-not-orthogonal
<p>From the definition of eigenvectors of a (square) matrix $A$, I can obtain the result: $$ A=P\Omega P^{-1} $$ where $P$ is a unitary matrix (columns are eigenvectors), and $\Omega $ is a diagonal matrix of eigenvalues.</p> <p>But I am wondering about a very different situation: is it possible to decompose a given matrix $A$ as $$ A=M \Omega M^{-1} $$ where $M$ is NOT unitary (the columns are not orthogonal)? In other words, how can one find a non-orthogonal matrix $M$ making $A=M \Omega M^{-1}$ hold ($\Omega$ is diagonal), given an arbitrary square matrix $A$?</p>
[ { "answer_id": 2105738, "score": 2, "text": "<p>That $P$ is unitary is a relatively rare case! It always happens when $A$ is \"normal\", which means $AA^H = A^H A$. It can occur more often, but I'm not sure about that. </p>\n\n<p>One example to make a difference is by using\n$$\nA = \\begin{pmatrix}1 &amp;2 \\\\ 0 &amp; 1\\end{pmatrix} \\begin{pmatrix} 2 &amp; 0 \\\\ 0 &amp;5\\end{pmatrix}\\begin{pmatrix}1 &amp;2 \\\\ 0 &amp; 1\\end{pmatrix}^{-1} = \\begin{pmatrix}2&amp;6\\\\0&amp;5 \\end{pmatrix}\n$$\nThe columns of $P$ are not orthogonal. </p>\n\n<p>Keep in mind that it always depends on $A$ and its eigenvalues and eigenvectors. If the vectors are orthogonal, you can find an orthonormal basis. If not, you cannot. However, you can judge this by calculating the eigenvectors. </p>\n" }, { "answer_id": 2105742, "score": 1, "text": "<p>For a square matrix $A$ of order $n$, $A=M\\Omega M^{-1}$ holds iff $A$ has $n$ linearly independent eigenvectors, where $M$ contains the $n$ eigenvectors as its columns and obviously $\\det(M)\\ne 0$. Moreover, to satisfy the restriction for $M$ (non-unitary), matrix $A$ should not be normal.</p>\n" } ]
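The $2\times 2$ example in answer 2105738 can be checked directly with a few lines of exact integer arithmetic (a sketch using plain Python lists, not code from the answers themselves): the eigenvector matrix $M$ is not unitary and its columns are not orthogonal, yet $M\Omega M^{-1}$ still reproduces $A$.

```python
# Minimal sketch verifying the decomposition from the example above.
# M holds the eigenvectors (1,0) and (2,1) of A as columns; it is NOT unitary.
M     = [[1, 2],
         [0, 1]]
M_inv = [[1, -2],
         [0, 1]]          # exact inverse, since det(M) = 1
Omega = [[2, 0],
         [0, 5]]          # the eigenvalues of A on the diagonal

def matmul(X, Y):
    """2x2 integer matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = matmul(matmul(M, Omega), M_inv)
print(A)  # -> [[2, 6], [0, 5]]

# The columns of M are not orthogonal: their dot product is 2, not 0.
col_dot = M[0][0] * M[0][1] + M[1][0] * M[1][1]
print(col_dot)  # -> 2
```

Exact integers avoid the floating-point round-off that `numpy.linalg.eig` would introduce here, so the reconstruction $M\Omega M^{-1}=A$ holds identically rather than approximately.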
Question: Matrix diagonalization: what if the (basis) vectors are not orthogonal? Answer 1: <p>That $P$ is unitary is a relatively rare case! It always happens when $A$ is "normal", which means $AA^H = A^H A$. It can occur more often, but I'm not sure about that. </p> <p>One example to make a difference is by using $$ A = \begin{pmatrix}1 &amp;2 \\ 0 &amp; 1\end{pmatrix} \begin{pmatrix} 2 &amp; 0 \\ 0 &amp;5\end{pmatrix}\begin{pmatrix}1 &amp;2 \\ 0 &amp; 1\end{pmatrix}^{-1} = \begin{pmatrix}2&amp;6\\0&amp;5 \end{pmatrix} $$ The columns of $P$ are not orthogonal. </p> <p>Keep in mind that it always depends on $A$ and its eigenvalues and eigenvectors. If the vectors are orthogonal, you can find an orthonormal basis. If not, you cannot. However, you can judge this by calculating the eigenvectors. </p> Answer 2: <p>For a square matrix $A$ of order $n$, $A=M\Omega M^{-1}$ holds iff $A$ has $n$ linearly independent eigenvectors, where $M$ contains the $n$ eigenvectors as its columns and obviously $\det(M)\ne 0$. Moreover, to satisfy the restriction for $M$ (non-unitary), matrix $A$ should not be normal.</p>
</p> <p>One example to make a difference is by using $$ A = \begin{pmatrix}1 &amp;2 \\ 0 &amp; 1\end{pmatrix} \begin{pmatrix} 2 &amp; 0 \\ 0 &amp;5\end{pmatrix}\begin{pmatrix}1 &amp;2 \\ 0 &amp; 1\end{pmatrix}^{-1} = \begin{pmatrix}2&amp;6\\0&amp;5 \end{pmatrix} $$ The collumns of $P$ are not orthogonal. </p> <p>Keep in mind, that it always depends on $A$ and it's eigenvalues and eigenvectors. If the vectors are orthogonal, you can find an orthonormal basis. If not, you can not. However you can just judge this by calculationg the eigenvalues. </p> Answer 16: <p>For a square matrix $A$ of order $n$, $A=M\Omega M^{-1}$ holds iff $A$ has $n$ linearly independent eigenvectors, where $M$ contains the $n$ eigenvectors as its columns and obviously $det(M)\ne 0$. Moreover, to satisfy the restriction for $M$ (non-unitary), matrix $A$ should not be normal .</p> Answer 17: <p>That $P$ is unitary is a relativly rare case! Always happens, when $A$ is "normal", which means $AA^H = A^H A$. It can occure more often, but I#m not sure about that. </p> <p>One example to make a difference is by using $$ A = \begin{pmatrix}1 &amp;2 \\ 0 &amp; 1\end{pmatrix} \begin{pmatrix} 2 &amp; 0 \\ 0 &amp;5\end{pmatrix}\begin{pmatrix}1 &amp;2 \\ 0 &amp; 1\end{pmatrix}^{-1} = \begin{pmatrix}2&amp;6\\0&amp;5 \end{pmatrix} $$ The collumns of $P$ are not orthogonal. </p> <p>Keep in mind, that it always depends on $A$ and it's eigenvalues and eigenvectors. If the vectors are orthogonal, you can find an orthonormal basis. If not, you can not. However you can just judge this by calculationg the eigenvalues. </p> Answer 18: <p>For a square matrix $A$ of order $n$, $A=M\Omega M^{-1}$ holds iff $A$ has $n$ linearly independent eigenvectors, where $M$ contains the $n$ eigenvectors as its columns and obviously $det(M)\ne 0$. Moreover, to satisfy the restriction for $M$ (non-unitary), matrix $A$ should not be normal .</p> Answer 19: <p>That $P$ is unitary is a relativly rare case! 
Always happens, when $A$ is "normal", which means $AA^H = A^H A$. It can occure more often, but I#m not sure about that. </p> <p>One example to make a difference is by using $$ A = \begin{pmatrix}1 &amp;2 \\ 0 &amp; 1\end{pmatrix} \begin{pmatrix} 2 &amp; 0 \\ 0 &amp;5\end{pmatrix}\begin{pmatrix}1 &amp;2 \\ 0 &amp; 1\end{pmatrix}^{-1} = \begin{pmatrix}2&amp;6\\0&amp;5 \end{pmatrix} $$ The collumns of $P$ are not orthogonal. </p> <p>Keep in mind, that it always depends on $A$ and it's eigenvalues and eigenvectors. If the vectors are orthogonal, you can find an orthonormal basis. If not, you can not. However you can just judge this by calculationg the eigenvalues. </p> Answer 20: <p>For a square matrix $A$ of order $n$, $A=M\Omega M^{-1}$ holds iff $A$ has $n$ linearly independent eigenvectors, where $M$ contains the $n$ eigenvectors as its columns and obviously $det(M)\ne 0$. Moreover, to satisfy the restriction for $M$ (non-unitary), matrix $A$ should not be normal .</p>
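Answer 5's criterion — the eigenvector matrix can be chosen unitary exactly when $A$ is normal, i.e. $AA^H = A^H A$ — is easy to test directly. Below is a minimal pure-Python sketch for real matrices (so the conjugate transpose is just the transpose); the helper names are my own, not taken from the answers:

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def is_normal(A):
    """Real case of A A^H == A^H A: compare A A^T with A^T A."""
    At = transpose(A)
    return matmul(A, At) == matmul(At, A)
```

For the matrix [[2, 6], [0, 5]] from Answer 5, `is_normal` returns `False`, matching the observation that its eigenvectors (1, 0) and (2, 1) are not orthogonal; a symmetric matrix such as [[2, 1], [1, 2]] is normal and admits an orthonormal eigenbasis.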
Proof of Separating Axis Theorem for polygons
2,106,402
6
https://math.stackexchange.com/questions/2106402/proof-of-separating-axis-theorem-for-polygons
<p>One well-known special case of the <a href="https://en.wikipedia.org/wiki/Hyperplane_separation_theorem" rel="noreferrer">Hyperplane Separation Theorem</a> (Separating Axis Theorem) states:</p> <blockquote> <p>Let $A, B \subset \mathbb{R}^2$ be convex polygons. Then $A \cap B = \emptyset$ if and only if there is a straight line $L$, such that $A \subset L^-$ and $B \subset L^+$</p> </blockquote> <p>…where $L^-$ and $L^+$ are the open half-planes formed by $L$.</p> <p>In other words: two convex polygons do not collide if and only if you can draw a straight line between them.</p> <p><img src="https://i.stack.imgur.com/nDKhg.png" alt="Illustration"></p> <p>When reading articles about collision detection in computer science, you often find the following, stronger version:</p> <blockquote> <p>Let $A, B \subset \mathbb{R}^2$ be convex polygons. Then $A \cap B = \emptyset$ if and only if there is a straight line $L$ <strong>parallel to one of $A$'s or $B$'s edges</strong>, such that $A \subset L^-$ and $B \subset L^+$.</p> </blockquote> <p><img src="https://i.stack.imgur.com/r11fn.png" alt="Illustration"></p> <p>However, I haven't found any proof for this second version anywhere. I'd also think that the generalization is true for $\mathbb{R}^n$ with $n \ge 1$, but I have no idea how to approach this. Anyone?</p>
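The stronger, edge-parallel version is exactly what collision-detection code implements: project both polygons onto the normal of each edge and report a gap if any pair of projection intervals is disjoint. A minimal sketch in Python (vertices given in order around each convex polygon; function names are illustrative, not from any particular library):

```python
def edge_normals(poly):
    """Yield a normal vector for each edge of the polygon."""
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        yield (y2 - y1, x1 - x2)  # perpendicular to the edge direction

def project(poly, axis):
    """Interval (min, max) of the polygon projected onto the axis."""
    dots = [x * axis[0] + y * axis[1] for (x, y) in poly]
    return min(dots), max(dots)

def polygons_disjoint(A, B):
    """SAT test: True iff a line parallel to some edge of A or B separates them."""
    for axis in list(edge_normals(A)) + list(edge_normals(B)):
        a_min, a_max = project(A, axis)
        b_min, b_max = project(B, axis)
        if a_max < b_min or b_max < a_min:
            return True  # gap on this axis => a separating line exists
    return False
```

Two unit squares separated along the $x$-axis are reported disjoint, while overlapping squares are not; by the stronger version of the theorem, checking only these edge normals suffices for convex polygons in the plane.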
[ { "answer_id": 2203231, "score": 7, "text": "<p>Great question! It's amazing how many people claim this but without any argument or proof. I couldn't find any proof of it online, but I did find a proof myself. I reformulated the theorem so it is correct in $n$ dimensions, and then extracted the original theorem from it in the $2D$ case.</p>\n\n<p><strong>Theorem (general):</strong> let $A, B \\subset \\mathbb{R}^n$ be 2 convex compact polytopes such that $A\\cap B = \\emptyset$, then there exists a separating hyperplane with normal vector orthogonal to one of the facets of the Minkowski sum $A+(-B)$.</p>\n\n<p>To make it easier on terminology I will work in $\\mathbb{R}^2$ here, but the proof works exactly the same way for any other $n$ (just different words).</p>\n\n<p><strong>Theorem (2D):</strong> let $A, B \\subset \\mathbb{R}^2$ be 2 convex compact polygons such that $A\\cap B = \\emptyset$, then there exists a separating line with normal vector orthogonal to one of the sides of the Minkowski sum $A+(-B)$.</p>\n\n<p><strong>Proof:</strong></p>\n\n<p><em>Step 1, choosing the axis:</em>\nDenote $K := A+(-B)$. $K$ is itself a compact convex polygon. We can extend each of its sides to a line. Since $K$ is convex it will always be completely in 1 of the 2 half-planes given by such a line. $K$ is even equal to the intersection of all these half-planes (<a href=\"https://en.wikipedia.org/wiki/Convex_polytope#Intersection_of_half-spaces\" rel=\"noreferrer\">this is one of the definitions of a convex polygon</a>). Write\n$$K = \\bigcap_i H_i$$\nwhere the $H_i$ are these half-planes. Since $A\\cap B=\\emptyset$ we have $0\\notin K$. This implies $0\\notin H_i$ for some $i$. This $H_i$ was derived from a line $l$ that was the extension of one of the sides of $K$. Now take $v \\in l$ such that $\\lVert v \\rVert = \\inf_{x\\in l} \\lVert x\\rVert$. This vector exists because of the <a href=\"https://en.wikipedia.org/wiki/Hilbert_projection_theorem\" rel=\"noreferrer\">Hilbert projection theorem</a>. Note that, since $0\\not\\in l$, we have $v\\ne 0$.</p>\n\n<p><em>Step 2, orthogonality:</em> Take a unit vector $w$ in the direction of $l$ (in $n$-D: orthogonal to the normal of $l$). We then have that $v-aw \\in l$ for all $a \\in \\Bbb R$, and because $v$ has minimal norm:\n\\begin{align*}\n\\lVert v \\rVert ^2 &amp; \\leq \\lVert v - aw\\rVert ^2\\\\\n&amp; = \\langle v- aw,v-aw\\rangle\\\\\n&amp;= \\lVert v \\rVert ^2 - 2 a \\langle v,w \\rangle + a^2\\\\\n\\end{align*}\nNow let $a = \\langle v,w\\rangle$; then we have\n$$\\lVert v\\rVert ^2 \\leq \\lVert v\\rVert ^2 - \\langle v,w \\rangle ^2$$\nso $\\langle v,w\\rangle = 0$ and $v$ is orthogonal to $l$.</p>\n\n<p><em>Step 3, separation:</em> Take a look at the proof of the hyperplane separation theorem given <a href=\"https://en.wikipedia.org/wiki/Hyperplane_separation_theorem#Statements_and_proof\" rel=\"noreferrer\">here</a>. If we prove that\n$$ \\lVert v \\rVert ^2 \\leq \\lVert v + t(x-v) \\rVert^2 $$\nfor all $t\\in [0,1]$ and for all $x\\in K$, then the rest of the proof can be done exactly the same. Ask me if you want me to explicitly write it down here, but it's lengthy. To show the desired inequality, remember that $0$ is on the other side of $l$ than $K$. Since a half-plane is convex, $v + t(x-v)$ will be on the same side of $l$ as $K$. This means that the line segment connecting $0$ and $v + t(x-v)$ will intersect $l$ at some point $s$, where\n$$\\lVert s\\rVert \\leq \\lVert v + t(x-v) \\rVert $$ Now because $s\\in l$ we have $\\lVert v\\rVert \\leq \\lVert s \\rVert$ and we are done.</p>\n\n<p>Your version of the theorem follows from this because in 2D, the Minkowski sum of 2 polygons can be obtained just by sliding around the edges of the original polygons.
This is no longer true in 3D or higher so that explains the counter example given in the other answer that looked like this:\n<a href=\"https://i.stack.imgur.com/LMBfH.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/LMBfH.png\" alt=\"enter image description here\"></a></p>\n\n<p>However, the modified theorem does hold in 3D and higher, so we need to look at the Minkowski sum. Call the bottom object $A$ and the top one $B$, then $A+(-B)$ will look like this:<a href=\"https://i.stack.imgur.com/JRZwF.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/JRZwF.png\" alt=\"enter image description here\"></a></p>\n\n<p>This Minkowski sum does provide a separating axis orthogonal to its top facet.</p>\n" }, { "answer_id": 2116902, "score": 3, "text": "<p>I think this is an interesting question. </p>\n\n<p>However, the result is <em>not true for dimension $n\\geq 3$</em>. Consider the following counter example for $n=3$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/8PXjR.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8PXjR.png\" alt=\"enter image description here\"></a>\nThe counter example is probably best seen using the picture, although I can provide the half-space description if necessary. The counter example is two prisms with rotated squares as a base. They are situated in such a way that any half space defining one contains points of the other. Therefore, the conclusion is false.</p>\n\n<p>Moreover, this can be generalized by taking $n$-dimensional cubic-prisms and orient them in this way (here an $n$-dimensional cubic-prism has a $n-1$ dimensional cube as it's base). </p>\n\n<p>Also, it sounds like you have a solution for dimension 2. In light of how the conclusion is false for higher dimensions, I do not believe that such a solution will be simple in the sense that it avoids using geometry in 2 dimensions. </p>\n" }, { "answer_id": 2203231, "score": 7, "text": "<p>Great question! 
it's amazing how many people claim this but without any argument or proof. I couldn't find any proof of it online but I did find a proof myself. I reformulated the theorem so it is correct in $n$ dimensions, and then extracted the original theorem from it in the $2D$ case.</p>\n\n<p><strong>Theorem (general):</strong> let $A, B \\subset \\mathbb{R}^n$ be 2 convex compact polytopes such that $A\\cap B = \\emptyset$, then there exists a separating hyperplane with normal vector orthogonal to one of the facets of the Minkowski sum $A+(-B)$</p>\n\n<p>To make it easier on terminology I will work in $\\mathbb{R}^2$ here, but the proof works exactly the same way for any other $n$ (just different words)</p>\n\n<p><strong>Theorem (2D):</strong> let $A, B \\subset \\mathbb{R}^2$ be 2 convex compact polygons such that $A\\cap B = \\emptyset$, then there exists a separating line with normal vector orthogonal to one of the sides the Minkowski sum $A+(-B)$</p>\n\n<p><strong>Proof:</strong></p>\n\n<p><em>Step 1, choosing the axis:</em>\nDenote $K := A+(-B)$. $K$ is itself a compact convex polygon. We can extend each of its sides to a line. Since K is convex it will always be completely in 1 of the 2 halfplanes given by such a line. K is even equal to the intersection of all these halfplanes. (<a href=\"https://en.wikipedia.org/wiki/Convex_polytope#Intersection_of_half-spaces\" rel=\"noreferrer\">this is one of the definitions of a convex polygon</a>). write\n$$K = \\bigcap_i H_i$$\nwhere the $H_i$ are these halfplanes. Since $A\\cap B=\\emptyset$ we have $0\\notin K$. This implies $0\\notin H_i$ for some $i$. This $H_i$ was derived from a line $l$ that was the extention of one of the sides of $K$. Now take $v \\in l$ such that $\\lVert v \\rVert = \\inf_{x\\in l} \\lVert x\\rVert$. This vector exists because of the <a href=\"https://en.wikipedia.org/wiki/Hilbert_projection_theorem\" rel=\"noreferrer\">Hilbert projection theorem</a>. 
Note that, since $0\\not\\in l$, we have $v\\ne 0$.</p>\n\n<p><em>Step 2, orthogonality:</em> Take a unit vector $w$ in the direction of $l$ (in $n$-D: orthogonal to the normal of $l$). We then have that $v-aw \\in l$ for all $a \\in \\Bbb R$, and because $v$ has minimal norm:\n\\begin{align*}\n\\lVert v \\rVert ^2 &amp; \\leq \\lVert v - aw\\rVert ^2\\\\\n&amp; = \\langle v- aw,v-aw\\rangle\\\\\n&amp;= \\lVert v \\rVert ^2 - 2 a \\langle v,w \\rangle + a^2\\\\\n\\end{align*}\nNow, let $a = \\langle v,w\\rangle$ then we have\n$$\\lVert v\\rVert ^2 \\leq \\lVert v\\rVert ^2 - \\langle v,w \\rangle ^2$$\nso $\\langle v,w\\rangle = 0$ and $v$ is orthogonal to $l$.</p>\n\n<p><em>Step 3, separation:</em> Take a look at the proof of the hyperplane separation theorem given <a href=\"https://en.wikipedia.org/wiki/Hyperplane_separation_theorem#Statements_and_proof\" rel=\"noreferrer\">here</a>. If we prove that\n$$ \\lVert v \\rVert ^2 \\leq \\lVert v + t(x-v) \\rVert^2 $$\nfor all $t\\in [0,1]$ and for all $x\\in K$, then the rest of the proof can be done exactly the same. Ask me if you want me to explicitly write it down here but it's lengthy. To show the desired inequality, remember that $0$ is on the other side of $l$ than $K$. since a halfplane is convex, $v + t(x-v)$ will be on the same side of $l$ as $K$. This means that the line segment connecting $0$ and $v + t(x-v)$ will intersect $l$ at some point $s$, where\n$$\\lVert s\\rVert \\leq \\lVert v + t(x-v) \\rVert $$ Now because $s\\in l$ we have $\\lVert v\\rVert \\leq \\lVert s \\rVert$ and we are done.</p>\n\n<p>Your version of the theorem follows from this because in 2D, the Minkowski sum of 2 polygons can be obtained just by sliding arroud the edges of the original polygons. 
This is no longer true in 3D or higher so that explains the counter example given in the other answer that looked like this:\n<a href=\"https://i.stack.imgur.com/LMBfH.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/LMBfH.png\" alt=\"enter image description here\"></a></p>\n\n<p>However, the modified theorem does hold in 3D and higher, so we need to look at the Minkowski sum. Call the bottom object $A$ and the top one $B$, then $A+(-B)$ will look like this:<a href=\"https://i.stack.imgur.com/JRZwF.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/JRZwF.png\" alt=\"enter image description here\"></a></p>\n\n<p>This Minkowski sum does provide a separating axis orthogonal to its top facet.</p>\n" }, { "answer_id": 2116902, "score": 3, "text": "<p>I think this is an interesting question. </p>\n\n<p>However, the result is <em>not true for dimension $n\\geq 3$</em>. Consider the following counter example for $n=3$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/8PXjR.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8PXjR.png\" alt=\"enter image description here\"></a>\nThe counter example is probably best seen using the picture, although I can provide the half-space description if necessary. The counter example is two prisms with rotated squares as a base. They are situated in such a way that any half space defining one contains points of the other. Therefore, the conclusion is false.</p>\n\n<p>Moreover, this can be generalized by taking $n$-dimensional cubic-prisms and orient them in this way (here an $n$-dimensional cubic-prism has a $n-1$ dimensional cube as it's base). </p>\n\n<p>Also, it sounds like you have a solution for dimension 2. In light of how the conclusion is false for higher dimensions, I do not believe that such a solution will be simple in the sense that it avoids using geometry in 2 dimensions. </p>\n" }, { "answer_id": 2203231, "score": 7, "text": "<p>Great question! 
it's amazing how many people claim this but without any argument or proof. I couldn't find any proof of it online but I did find a proof myself. I reformulated the theorem so it is correct in $n$ dimensions, and then extracted the original theorem from it in the $2D$ case.</p>\n\n<p><strong>Theorem (general):</strong> let $A, B \\subset \\mathbb{R}^n$ be 2 convex compact polytopes such that $A\\cap B = \\emptyset$, then there exists a separating hyperplane with normal vector orthogonal to one of the facets of the Minkowski sum $A+(-B)$</p>\n\n<p>To make it easier on terminology I will work in $\\mathbb{R}^2$ here, but the proof works exactly the same way for any other $n$ (just different words)</p>\n\n<p><strong>Theorem (2D):</strong> let $A, B \\subset \\mathbb{R}^2$ be 2 convex compact polygons such that $A\\cap B = \\emptyset$, then there exists a separating line with normal vector orthogonal to one of the sides the Minkowski sum $A+(-B)$</p>\n\n<p><strong>Proof:</strong></p>\n\n<p><em>Step 1, choosing the axis:</em>\nDenote $K := A+(-B)$. $K$ is itself a compact convex polygon. We can extend each of its sides to a line. Since K is convex it will always be completely in 1 of the 2 halfplanes given by such a line. K is even equal to the intersection of all these halfplanes. (<a href=\"https://en.wikipedia.org/wiki/Convex_polytope#Intersection_of_half-spaces\" rel=\"noreferrer\">this is one of the definitions of a convex polygon</a>). write\n$$K = \\bigcap_i H_i$$\nwhere the $H_i$ are these halfplanes. Since $A\\cap B=\\emptyset$ we have $0\\notin K$. This implies $0\\notin H_i$ for some $i$. This $H_i$ was derived from a line $l$ that was the extention of one of the sides of $K$. Now take $v \\in l$ such that $\\lVert v \\rVert = \\inf_{x\\in l} \\lVert x\\rVert$. This vector exists because of the <a href=\"https://en.wikipedia.org/wiki/Hilbert_projection_theorem\" rel=\"noreferrer\">Hilbert projection theorem</a>. 
Note that, since $0\\not\\in l$, we have $v\\ne 0$.</p>\n\n<p><em>Step 2, orthogonality:</em> Take a unit vector $w$ in the direction of $l$ (in $n$-D: orthogonal to the normal of $l$). We then have that $v-aw \\in l$ for all $a \\in \\Bbb R$, and because $v$ has minimal norm:\n\\begin{align*}\n\\lVert v \\rVert ^2 &amp; \\leq \\lVert v - aw\\rVert ^2\\\\\n&amp; = \\langle v- aw,v-aw\\rangle\\\\\n&amp;= \\lVert v \\rVert ^2 - 2 a \\langle v,w \\rangle + a^2\\\\\n\\end{align*}\nNow, let $a = \\langle v,w\\rangle$ then we have\n$$\\lVert v\\rVert ^2 \\leq \\lVert v\\rVert ^2 - \\langle v,w \\rangle ^2$$\nso $\\langle v,w\\rangle = 0$ and $v$ is orthogonal to $l$.</p>\n\n<p><em>Step 3, separation:</em> Take a look at the proof of the hyperplane separation theorem given <a href=\"https://en.wikipedia.org/wiki/Hyperplane_separation_theorem#Statements_and_proof\" rel=\"noreferrer\">here</a>. If we prove that\n$$ \\lVert v \\rVert ^2 \\leq \\lVert v + t(x-v) \\rVert^2 $$\nfor all $t\\in [0,1]$ and for all $x\\in K$, then the rest of the proof can be done exactly the same. Ask me if you want me to explicitly write it down here but it's lengthy. To show the desired inequality, remember that $0$ is on the other side of $l$ than $K$. since a halfplane is convex, $v + t(x-v)$ will be on the same side of $l$ as $K$. This means that the line segment connecting $0$ and $v + t(x-v)$ will intersect $l$ at some point $s$, where\n$$\\lVert s\\rVert \\leq \\lVert v + t(x-v) \\rVert $$ Now because $s\\in l$ we have $\\lVert v\\rVert \\leq \\lVert s \\rVert$ and we are done.</p>\n\n<p>Your version of the theorem follows from this because in 2D, the Minkowski sum of 2 polygons can be obtained just by sliding arroud the edges of the original polygons. 
This is no longer true in 3D or higher so that explains the counter example given in the other answer that looked like this:\n<a href=\"https://i.stack.imgur.com/LMBfH.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/LMBfH.png\" alt=\"enter image description here\"></a></p>\n\n<p>However, the modified theorem does hold in 3D and higher, so we need to look at the Minkowski sum. Call the bottom object $A$ and the top one $B$, then $A+(-B)$ will look like this:<a href=\"https://i.stack.imgur.com/JRZwF.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/JRZwF.png\" alt=\"enter image description here\"></a></p>\n\n<p>This Minkowski sum does provide a separating axis orthogonal to its top facet.</p>\n" }, { "answer_id": 2116902, "score": 3, "text": "<p>I think this is an interesting question. </p>\n\n<p>However, the result is <em>not true for dimension $n\\geq 3$</em>. Consider the following counter example for $n=3$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/8PXjR.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8PXjR.png\" alt=\"enter image description here\"></a>\nThe counter example is probably best seen using the picture, although I can provide the half-space description if necessary. The counter example is two prisms with rotated squares as a base. They are situated in such a way that any half space defining one contains points of the other. Therefore, the conclusion is false.</p>\n\n<p>Moreover, this can be generalized by taking $n$-dimensional cubic-prisms and orient them in this way (here an $n$-dimensional cubic-prism has a $n-1$ dimensional cube as it's base). </p>\n\n<p>Also, it sounds like you have a solution for dimension 2. In light of how the conclusion is false for higher dimensions, I do not believe that such a solution will be simple in the sense that it avoids using geometry in 2 dimensions. </p>\n" }, { "answer_id": 2203231, "score": 7, "text": "<p>Great question! 
it's amazing how many people claim this but without any argument or proof. I couldn't find any proof of it online but I did find a proof myself. I reformulated the theorem so it is correct in $n$ dimensions, and then extracted the original theorem from it in the $2D$ case.</p>\n\n<p><strong>Theorem (general):</strong> let $A, B \\subset \\mathbb{R}^n$ be 2 convex compact polytopes such that $A\\cap B = \\emptyset$, then there exists a separating hyperplane with normal vector orthogonal to one of the facets of the Minkowski sum $A+(-B)$</p>\n\n<p>To make it easier on terminology I will work in $\\mathbb{R}^2$ here, but the proof works exactly the same way for any other $n$ (just different words)</p>\n\n<p><strong>Theorem (2D):</strong> let $A, B \\subset \\mathbb{R}^2$ be 2 convex compact polygons such that $A\\cap B = \\emptyset$, then there exists a separating line with normal vector orthogonal to one of the sides the Minkowski sum $A+(-B)$</p>\n\n<p><strong>Proof:</strong></p>\n\n<p><em>Step 1, choosing the axis:</em>\nDenote $K := A+(-B)$. $K$ is itself a compact convex polygon. We can extend each of its sides to a line. Since K is convex it will always be completely in 1 of the 2 halfplanes given by such a line. K is even equal to the intersection of all these halfplanes. (<a href=\"https://en.wikipedia.org/wiki/Convex_polytope#Intersection_of_half-spaces\" rel=\"noreferrer\">this is one of the definitions of a convex polygon</a>). write\n$$K = \\bigcap_i H_i$$\nwhere the $H_i$ are these halfplanes. Since $A\\cap B=\\emptyset$ we have $0\\notin K$. This implies $0\\notin H_i$ for some $i$. This $H_i$ was derived from a line $l$ that was the extention of one of the sides of $K$. Now take $v \\in l$ such that $\\lVert v \\rVert = \\inf_{x\\in l} \\lVert x\\rVert$. This vector exists because of the <a href=\"https://en.wikipedia.org/wiki/Hilbert_projection_theorem\" rel=\"noreferrer\">Hilbert projection theorem</a>. 
Note that, since $0\\not\\in l$, we have $v\\ne 0$.</p>\n\n<p><em>Step 2, orthogonality:</em> Take a unit vector $w$ in the direction of $l$ (in $n$-D: orthogonal to the normal of $l$). We then have that $v-aw \\in l$ for all $a \\in \\Bbb R$, and because $v$ has minimal norm:\n\\begin{align*}\n\\lVert v \\rVert ^2 &amp; \\leq \\lVert v - aw\\rVert ^2\\\\\n&amp; = \\langle v- aw,v-aw\\rangle\\\\\n&amp;= \\lVert v \\rVert ^2 - 2 a \\langle v,w \\rangle + a^2\\\\\n\\end{align*}\nNow, let $a = \\langle v,w\\rangle$ then we have\n$$\\lVert v\\rVert ^2 \\leq \\lVert v\\rVert ^2 - \\langle v,w \\rangle ^2$$\nso $\\langle v,w\\rangle = 0$ and $v$ is orthogonal to $l$.</p>\n\n<p><em>Step 3, separation:</em> Take a look at the proof of the hyperplane separation theorem given <a href=\"https://en.wikipedia.org/wiki/Hyperplane_separation_theorem#Statements_and_proof\" rel=\"noreferrer\">here</a>. If we prove that\n$$ \\lVert v \\rVert ^2 \\leq \\lVert v + t(x-v) \\rVert^2 $$\nfor all $t\\in [0,1]$ and for all $x\\in K$, then the rest of the proof can be done exactly the same. Ask me if you want me to explicitly write it down here but it's lengthy. To show the desired inequality, remember that $0$ is on the other side of $l$ than $K$. since a halfplane is convex, $v + t(x-v)$ will be on the same side of $l$ as $K$. This means that the line segment connecting $0$ and $v + t(x-v)$ will intersect $l$ at some point $s$, where\n$$\\lVert s\\rVert \\leq \\lVert v + t(x-v) \\rVert $$ Now because $s\\in l$ we have $\\lVert v\\rVert \\leq \\lVert s \\rVert$ and we are done.</p>\n\n<p>Your version of the theorem follows from this because in 2D, the Minkowski sum of 2 polygons can be obtained just by sliding arroud the edges of the original polygons. 
This is no longer true in 3D or higher so that explains the counter example given in the other answer that looked like this:\n<a href=\"https://i.stack.imgur.com/LMBfH.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/LMBfH.png\" alt=\"enter image description here\"></a></p>\n\n<p>However, the modified theorem does hold in 3D and higher, so we need to look at the Minkowski sum. Call the bottom object $A$ and the top one $B$, then $A+(-B)$ will look like this:<a href=\"https://i.stack.imgur.com/JRZwF.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/JRZwF.png\" alt=\"enter image description here\"></a></p>\n\n<p>This Minkowski sum does provide a separating axis orthogonal to its top facet.</p>\n" }, { "answer_id": 2116902, "score": 3, "text": "<p>I think this is an interesting question. </p>\n\n<p>However, the result is <em>not true for dimension $n\\geq 3$</em>. Consider the following counter example for $n=3$.</p>\n\n<p><a href=\"https://i.stack.imgur.com/8PXjR.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/8PXjR.png\" alt=\"enter image description here\"></a>\nThe counter example is probably best seen using the picture, although I can provide the half-space description if necessary. The counter example is two prisms with rotated squares as a base. They are situated in such a way that any half space defining one contains points of the other. Therefore, the conclusion is false.</p>\n\n<p>Moreover, this can be generalized by taking $n$-dimensional cubic-prisms and orient them in this way (here an $n$-dimensional cubic-prism has a $n-1$ dimensional cube as it's base). </p>\n\n<p>Also, it sounds like you have a solution for dimension 2. In light of how the conclusion is false for higher dimensions, I do not believe that such a solution will be simple in the sense that it avoids using geometry in 2 dimensions. </p>\n" }, { "answer_id": 2203231, "score": 7, "text": "<p>Great question! 
it's amazing how many people claim this but without any argument or proof. I couldn't find any proof of it online but I did find a proof myself. I reformulated the theorem so it is correct in $n$ dimensions, and then extracted the original theorem from it in the $2D$ case.</p>\n\n<p><strong>Theorem (general):</strong> let $A, B \\subset \\mathbb{R}^n$ be 2 convex compact polytopes such that $A\\cap B = \\emptyset$, then there exists a separating hyperplane with normal vector orthogonal to one of the facets of the Minkowski sum $A+(-B)$</p>\n\n<p>To make it easier on terminology I will work in $\\mathbb{R}^2$ here, but the proof works exactly the same way for any other $n$ (just different words)</p>\n\n<p><strong>Theorem (2D):</strong> let $A, B \\subset \\mathbb{R}^2$ be 2 convex compact polygons such that $A\\cap B = \\emptyset$, then there exists a separating line with normal vector orthogonal to one of the sides the Minkowski sum $A+(-B)$</p>\n\n<p><strong>Proof:</strong></p>\n\n<p><em>Step 1, choosing the axis:</em>\nDenote $K := A+(-B)$. $K$ is itself a compact convex polygon. We can extend each of its sides to a line. Since K is convex it will always be completely in 1 of the 2 halfplanes given by such a line. K is even equal to the intersection of all these halfplanes. (<a href=\"https://en.wikipedia.org/wiki/Convex_polytope#Intersection_of_half-spaces\" rel=\"noreferrer\">this is one of the definitions of a convex polygon</a>). write\n$$K = \\bigcap_i H_i$$\nwhere the $H_i$ are these halfplanes. Since $A\\cap B=\\emptyset$ we have $0\\notin K$. This implies $0\\notin H_i$ for some $i$. This $H_i$ was derived from a line $l$ that was the extention of one of the sides of $K$. Now take $v \\in l$ such that $\\lVert v \\rVert = \\inf_{x\\in l} \\lVert x\\rVert$. This vector exists because of the <a href=\"https://en.wikipedia.org/wiki/Hilbert_projection_theorem\" rel=\"noreferrer\">Hilbert projection theorem</a>. 
Note that, since $0\\not\\in l$, we have $v\\ne 0$.</p>\n\n<p><em>Step 2, orthogonality:</em> Take a unit vector $w$ in the direction of $l$ (in $n$-D: orthogonal to the normal of $l$). We then have that $v-aw \\in l$ for all $a \\in \\Bbb R$, and because $v$ has minimal norm:\n\\begin{align*}\n\\lVert v \\rVert ^2 &amp; \\leq \\lVert v - aw\\rVert ^2\\\\\n&amp; = \\langle v- aw,v-aw\\rangle\\\\\n&amp;= \\lVert v \\rVert ^2 - 2 a \\langle v,w \\rangle + a^2\\\\\n\\end{align*}\nNow, let $a = \\langle v,w\\rangle$ then we have\n$$\\lVert v\\rVert ^2 \\leq \\lVert v\\rVert ^2 - \\langle v,w \\rangle ^2$$\nso $\\langle v,w\\rangle = 0$ and $v$ is orthogonal to $l$.</p>\n\n<p><em>Step 3, separation:</em> Take a look at the proof of the hyperplane separation theorem given <a href=\"https://en.wikipedia.org/wiki/Hyperplane_separation_theorem#Statements_and_proof\" rel=\"noreferrer\">here</a>. If we prove that\n$$ \\lVert v \\rVert ^2 \\leq \\lVert v + t(x-v) \\rVert^2 $$\nfor all $t\\in [0,1]$ and for all $x\\in K$, then the rest of the proof can be done exactly the same. Ask me if you want me to explicitly write it down here but it's lengthy. To show the desired inequality, remember that $0$ is on the other side of $l$ than $K$. since a halfplane is convex, $v + t(x-v)$ will be on the same side of $l$ as $K$. This means that the line segment connecting $0$ and $v + t(x-v)$ will intersect $l$ at some point $s$, where\n$$\\lVert s\\rVert \\leq \\lVert v + t(x-v) \\rVert $$ Now because $s\\in l$ we have $\\lVert v\\rVert \\leq \\lVert s \\rVert$ and we are done.</p>\n\n<p>Your version of the theorem follows from this because in 2D, the Minkowski sum of 2 polygons can be obtained just by sliding arroud the edges of the original polygons. 
This is no longer true in 3D or higher so that explains the counter example given in the other answer that looked like this:\n<a href=\"https://i.stack.imgur.com/LMBfH.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/LMBfH.png\" alt=\"enter image description here\"></a></p>\n\n<p>However, the modified theorem does hold in 3D and higher, so we need to look at the Minkowski sum. Call the bottom object $A$ and the top one $B$, then $A+(-B)$ will look like this:<a href=\"https://i.stack.imgur.com/JRZwF.png\" rel=\"noreferrer\"><img src=\"https://i.stack.imgur.com/JRZwF.png\" alt=\"enter image description here\"></a></p>\n\n<p>This Minkowski sum does provide a separating axis orthogonal to its top facet.</p>\n" } ]
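The 2D theorem above is what makes the practical separating-axis collision test valid: in 2D the facet normals of $A+(-B)$ are exactly the edge normals of $A$ and $B$, so only those finitely many axes need to be checked. Below is a minimal Python sketch of that test. It is not from the thread itself, and the function names are my own; it assumes the polygons are given as vertex lists in order around the boundary.

```python
from typing import List, Tuple

Vec = Tuple[float, float]


def edge_normals(poly: List[Vec]) -> List[Vec]:
    """Perpendiculars of each edge; not normalized (length does not matter here)."""
    normals = []
    n = len(poly)
    for i in range(n):
        ex = poly[(i + 1) % n][0] - poly[i][0]
        ey = poly[(i + 1) % n][1] - poly[i][1]
        normals.append((-ey, ex))  # rotate the edge vector by 90 degrees
    return normals


def project(poly: List[Vec], axis: Vec) -> Tuple[float, float]:
    """Interval [min, max] of the polygon's projection onto the axis."""
    dots = [p[0] * axis[0] + p[1] * axis[1] for p in poly]
    return min(dots), max(dots)


def separated(a: List[Vec], b: List[Vec]) -> bool:
    """True iff some edge normal of a or b is a separating axis.

    By the 2D theorem, the edge normals of a and b are the facet normals
    of the Minkowski sum a + (-b), so checking only these axes suffices.
    """
    for axis in edge_normals(a) + edge_normals(b):
        amin, amax = project(a, axis)
        bmin, bmax = project(b, axis)
        if amax < bmin or bmax < amin:  # projection intervals are disjoint
            return True
    return False
```

For example, two axis-aligned unit squares at x-offsets 0 and 2 are reported as separated (the axis $(1,0)$ works), while overlapping squares fail every candidate axis.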
Question: Proof of Separating Axis Theorem for polygons

Answer 1: <p>Great question! It's amazing how many people claim this without any argument or proof. I couldn't find any proof of it online, but I did find a proof myself. I reformulated the theorem so that it is correct in $n$ dimensions, and then extracted the original theorem from it in the $2D$ case.</p> <p><strong>Theorem (general):</strong> Let $A, B \subset \mathbb{R}^n$ be two compact convex polytopes such that $A\cap B = \emptyset$. Then there exists a separating hyperplane with normal vector orthogonal to one of the facets of the Minkowski sum $A+(-B)$.</p> <p>To keep the terminology simple I will work in $\mathbb{R}^2$ here, but the proof works exactly the same way for any other $n$ (just with different words).</p> <p><strong>Theorem (2D):</strong> Let $A, B \subset \mathbb{R}^2$ be two compact convex polygons such that $A\cap B = \emptyset$. Then there exists a separating line with normal vector orthogonal to one of the sides of the Minkowski sum $A+(-B)$.</p> <p><strong>Proof:</strong></p> <p><em>Step 1, choosing the axis:</em> Denote $K := A+(-B)$. $K$ is itself a compact convex polygon. We can extend each of its sides to a line. Since $K$ is convex, it always lies completely in one of the two half-planes determined by such a line. $K$ is in fact equal to the intersection of all these half-planes (<a href="https://en.wikipedia.org/wiki/Convex_polytope#Intersection_of_half-spaces" rel="noreferrer">this is one of the definitions of a convex polygon</a>). Write $$K = \bigcap_i H_i$$ where the $H_i$ are these half-planes. Since $A\cap B=\emptyset$ we have $0\notin K$. This implies $0\notin H_i$ for some $i$. This $H_i$ was derived from a line $l$ that was the extension of one of the sides of $K$. Now take $v \in l$ such that $\lVert v \rVert = \inf_{x\in l} \lVert x\rVert$. This vector exists because of the <a href="https://en.wikipedia.org/wiki/Hilbert_projection_theorem" rel="noreferrer">Hilbert projection theorem</a>. 
Note that, since $0\not\in l$, we have $v\ne 0$.</p> <p><em>Step 2, orthogonality:</em> Take a unit vector $w$ in the direction of $l$ (in $n$-D: orthogonal to the normal of $l$). We then have that $v-aw \in l$ for all $a \in \Bbb R$, and because $v$ has minimal norm: \begin{align*} \lVert v \rVert ^2 &amp; \leq \lVert v - aw\rVert ^2\\ &amp; = \langle v- aw,v-aw\rangle\\ &amp;= \lVert v \rVert ^2 - 2 a \langle v,w \rangle + a^2 \end{align*} Now, letting $a = \langle v,w\rangle$, we have $$\lVert v\rVert ^2 \leq \lVert v\rVert ^2 - \langle v,w \rangle ^2,$$ so $\langle v,w\rangle = 0$ and $v$ is orthogonal to $l$.</p> <p><em>Step 3, separation:</em> Take a look at the proof of the hyperplane separation theorem given <a href="https://en.wikipedia.org/wiki/Hyperplane_separation_theorem#Statements_and_proof" rel="noreferrer">here</a>. If we prove that $$ \lVert v \rVert ^2 \leq \lVert v + t(x-v) \rVert^2 $$ for all $t\in [0,1]$ and all $x\in K$, then the rest of the proof goes through exactly the same way. (Ask me if you want me to write it out explicitly here, but it is lengthy.) To show the desired inequality, remember that $0$ is on the other side of $l$ from $K$. Since a half-plane is convex, $v + t(x-v)$ will be on the same side of $l$ as $K$. This means that the line segment connecting $0$ and $v + t(x-v)$ will intersect $l$ at some point $s$, where $$\lVert s\rVert \leq \lVert v + t(x-v) \rVert.$$ Now, because $s\in l$, we have $\lVert v\rVert \leq \lVert s \rVert$, and we are done.</p> <p>Your version of the theorem follows from this because in 2D, the Minkowski sum of two polygons can be obtained just by sliding around the edges of the original polygons. 
This is no longer true in 3D or higher so that explains the counter example given in the other answer that looked like this: <a href="https://i.stack.imgur.com/LMBfH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LMBfH.png" alt="enter image description here"></a></p> <p>However, the modified theorem does hold in 3D and higher, so we need to look at the Minkowski sum. Call the bottom object $A$ and the top one $B$, then $A+(-B)$ will look like this:<a href="https://i.stack.imgur.com/JRZwF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JRZwF.png" alt="enter image description here"></a></p> <p>This Minkowski sum does provide a separating axis orthogonal to its top facet.</p> Answer 2: <p>I think this is an interesting question. </p> <p>However, the result is <em>not true for dimension $n\geq 3$</em>. Consider the following counter example for $n=3$.</p> <p><a href="https://i.stack.imgur.com/8PXjR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8PXjR.png" alt="enter image description here"></a> The counter example is probably best seen using the picture, although I can provide the half-space description if necessary. The counter example is two prisms with rotated squares as a base. They are situated in such a way that any half space defining one contains points of the other. Therefore, the conclusion is false.</p> <p>Moreover, this can be generalized by taking $n$-dimensional cubic-prisms and orient them in this way (here an $n$-dimensional cubic-prism has a $n-1$ dimensional cube as it's base). </p> <p>Also, it sounds like you have a solution for dimension 2. In light of how the conclusion is false for higher dimensions, I do not believe that such a solution will be simple in the sense that it avoids using geometry in 2 dimensions. </p> Answer 3: <p>Great question! it's amazing how many people claim this but without any argument or proof. I couldn't find any proof of it online but I did find a proof myself. 
I reformulated the theorem so it is correct in $n$ dimensions, and then extracted the original theorem from it in the $2D$ case.</p> <p><strong>Theorem (general):</strong> let $A, B \subset \mathbb{R}^n$ be 2 convex compact polytopes such that $A\cap B = \emptyset$, then there exists a separating hyperplane with normal vector orthogonal to one of the facets of the Minkowski sum $A+(-B)$</p> <p>To make it easier on terminology I will work in $\mathbb{R}^2$ here, but the proof works exactly the same way for any other $n$ (just different words)</p> <p><strong>Theorem (2D):</strong> let $A, B \subset \mathbb{R}^2$ be 2 convex compact polygons such that $A\cap B = \emptyset$, then there exists a separating line with normal vector orthogonal to one of the sides the Minkowski sum $A+(-B)$</p> <p><strong>Proof:</strong></p> <p><em>Step 1, choosing the axis:</em> Denote $K := A+(-B)$. $K$ is itself a compact convex polygon. We can extend each of its sides to a line. Since K is convex it will always be completely in 1 of the 2 halfplanes given by such a line. K is even equal to the intersection of all these halfplanes. (<a href="https://en.wikipedia.org/wiki/Convex_polytope#Intersection_of_half-spaces" rel="noreferrer">this is one of the definitions of a convex polygon</a>). write $$K = \bigcap_i H_i$$ where the $H_i$ are these halfplanes. Since $A\cap B=\emptyset$ we have $0\notin K$. This implies $0\notin H_i$ for some $i$. This $H_i$ was derived from a line $l$ that was the extention of one of the sides of $K$. Now take $v \in l$ such that $\lVert v \rVert = \inf_{x\in l} \lVert x\rVert$. This vector exists because of the <a href="https://en.wikipedia.org/wiki/Hilbert_projection_theorem" rel="noreferrer">Hilbert projection theorem</a>. Note that, since $0\not\in l$, we have $v\ne 0$.</p> <p><em>Step 2, orthogonality:</em> Take a unit vector $w$ in the direction of $l$ (in $n$-D: orthogonal to the normal of $l$). 
We then have that $v-aw \in l$ for all $a \in \Bbb R$, and because $v$ has minimal norm: \begin{align*} \lVert v \rVert ^2 &amp; \leq \lVert v - aw\rVert ^2\\ &amp; = \langle v- aw,v-aw\rangle\\ &amp;= \lVert v \rVert ^2 - 2 a \langle v,w \rangle + a^2\\ \end{align*} Now, let $a = \langle v,w\rangle$ then we have $$\lVert v\rVert ^2 \leq \lVert v\rVert ^2 - \langle v,w \rangle ^2$$ so $\langle v,w\rangle = 0$ and $v$ is orthogonal to $l$.</p> <p><em>Step 3, separation:</em> Take a look at the proof of the hyperplane separation theorem given <a href="https://en.wikipedia.org/wiki/Hyperplane_separation_theorem#Statements_and_proof" rel="noreferrer">here</a>. If we prove that $$ \lVert v \rVert ^2 \leq \lVert v + t(x-v) \rVert^2 $$ for all $t\in [0,1]$ and for all $x\in K$, then the rest of the proof can be done exactly the same. Ask me if you want me to explicitly write it down here but it's lengthy. To show the desired inequality, remember that $0$ is on the other side of $l$ than $K$. since a halfplane is convex, $v + t(x-v)$ will be on the same side of $l$ as $K$. This means that the line segment connecting $0$ and $v + t(x-v)$ will intersect $l$ at some point $s$, where $$\lVert s\rVert \leq \lVert v + t(x-v) \rVert $$ Now because $s\in l$ we have $\lVert v\rVert \leq \lVert s \rVert$ and we are done.</p> <p>Your version of the theorem follows from this because in 2D, the Minkowski sum of 2 polygons can be obtained just by sliding arroud the edges of the original polygons. This is no longer true in 3D or higher so that explains the counter example given in the other answer that looked like this: <a href="https://i.stack.imgur.com/LMBfH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LMBfH.png" alt="enter image description here"></a></p> <p>However, the modified theorem does hold in 3D and higher, so we need to look at the Minkowski sum. 
Call the bottom object $A$ and the top one $B$, then $A+(-B)$ will look like this:<a href="https://i.stack.imgur.com/JRZwF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JRZwF.png" alt="enter image description here"></a></p> <p>This Minkowski sum does provide a separating axis orthogonal to its top facet.</p> Answer 4: <p>I think this is an interesting question. </p> <p>However, the result is <em>not true for dimension $n\geq 3$</em>. Consider the following counter example for $n=3$.</p> <p><a href="https://i.stack.imgur.com/8PXjR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8PXjR.png" alt="enter image description here"></a> The counter example is probably best seen using the picture, although I can provide the half-space description if necessary. The counter example is two prisms with rotated squares as a base. They are situated in such a way that any half space defining one contains points of the other. Therefore, the conclusion is false.</p> <p>Moreover, this can be generalized by taking $n$-dimensional cubic-prisms and orient them in this way (here an $n$-dimensional cubic-prism has a $n-1$ dimensional cube as it's base). </p> <p>Also, it sounds like you have a solution for dimension 2. In light of how the conclusion is false for higher dimensions, I do not believe that such a solution will be simple in the sense that it avoids using geometry in 2 dimensions. </p> Answer 5: <p>Great question! it's amazing how many people claim this but without any argument or proof. I couldn't find any proof of it online but I did find a proof myself. 
I reformulated the theorem so it is correct in $n$ dimensions, and then extracted the original theorem from it in the $2D$ case.</p> <p><strong>Theorem (general):</strong> let $A, B \subset \mathbb{R}^n$ be 2 convex compact polytopes such that $A\cap B = \emptyset$, then there exists a separating hyperplane with normal vector orthogonal to one of the facets of the Minkowski sum $A+(-B)$</p> <p>To make it easier on terminology I will work in $\mathbb{R}^2$ here, but the proof works exactly the same way for any other $n$ (just different words)</p> <p><strong>Theorem (2D):</strong> let $A, B \subset \mathbb{R}^2$ be 2 convex compact polygons such that $A\cap B = \emptyset$, then there exists a separating line with normal vector orthogonal to one of the sides the Minkowski sum $A+(-B)$</p> <p><strong>Proof:</strong></p> <p><em>Step 1, choosing the axis:</em> Denote $K := A+(-B)$. $K$ is itself a compact convex polygon. We can extend each of its sides to a line. Since K is convex it will always be completely in 1 of the 2 halfplanes given by such a line. K is even equal to the intersection of all these halfplanes. (<a href="https://en.wikipedia.org/wiki/Convex_polytope#Intersection_of_half-spaces" rel="noreferrer">this is one of the definitions of a convex polygon</a>). write $$K = \bigcap_i H_i$$ where the $H_i$ are these halfplanes. Since $A\cap B=\emptyset$ we have $0\notin K$. This implies $0\notin H_i$ for some $i$. This $H_i$ was derived from a line $l$ that was the extention of one of the sides of $K$. Now take $v \in l$ such that $\lVert v \rVert = \inf_{x\in l} \lVert x\rVert$. This vector exists because of the <a href="https://en.wikipedia.org/wiki/Hilbert_projection_theorem" rel="noreferrer">Hilbert projection theorem</a>. Note that, since $0\not\in l$, we have $v\ne 0$.</p> <p><em>Step 2, orthogonality:</em> Take a unit vector $w$ in the direction of $l$ (in $n$-D: orthogonal to the normal of $l$). 
We then have that $v-aw \in l$ for all $a \in \Bbb R$, and because $v$ has minimal norm: \begin{align*} \lVert v \rVert ^2 &amp; \leq \lVert v - aw\rVert ^2\\ &amp; = \langle v- aw,v-aw\rangle\\ &amp;= \lVert v \rVert ^2 - 2 a \langle v,w \rangle + a^2\\ \end{align*} Now, let $a = \langle v,w\rangle$ then we have $$\lVert v\rVert ^2 \leq \lVert v\rVert ^2 - \langle v,w \rangle ^2$$ so $\langle v,w\rangle = 0$ and $v$ is orthogonal to $l$.</p> <p><em>Step 3, separation:</em> Take a look at the proof of the hyperplane separation theorem given <a href="https://en.wikipedia.org/wiki/Hyperplane_separation_theorem#Statements_and_proof" rel="noreferrer">here</a>. If we prove that $$ \lVert v \rVert ^2 \leq \lVert v + t(x-v) \rVert^2 $$ for all $t\in [0,1]$ and for all $x\in K$, then the rest of the proof can be done exactly the same. Ask me if you want me to explicitly write it down here but it's lengthy. To show the desired inequality, remember that $0$ is on the other side of $l$ than $K$. since a halfplane is convex, $v + t(x-v)$ will be on the same side of $l$ as $K$. This means that the line segment connecting $0$ and $v + t(x-v)$ will intersect $l$ at some point $s$, where $$\lVert s\rVert \leq \lVert v + t(x-v) \rVert $$ Now because $s\in l$ we have $\lVert v\rVert \leq \lVert s \rVert$ and we are done.</p> <p>Your version of the theorem follows from this because in 2D, the Minkowski sum of 2 polygons can be obtained just by sliding arroud the edges of the original polygons. This is no longer true in 3D or higher so that explains the counter example given in the other answer that looked like this: <a href="https://i.stack.imgur.com/LMBfH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LMBfH.png" alt="enter image description here"></a></p> <p>However, the modified theorem does hold in 3D and higher, so we need to look at the Minkowski sum. 
Call the bottom object $A$ and the top one $B$, then $A+(-B)$ will look like this:<a href="https://i.stack.imgur.com/JRZwF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JRZwF.png" alt="enter image description here"></a></p> <p>This Minkowski sum does provide a separating axis orthogonal to its top facet.</p> Answer 6: <p>I think this is an interesting question. </p> <p>However, the result is <em>not true for dimension $n\geq 3$</em>. Consider the following counter example for $n=3$.</p> <p><a href="https://i.stack.imgur.com/8PXjR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8PXjR.png" alt="enter image description here"></a> The counter example is probably best seen using the picture, although I can provide the half-space description if necessary. The counter example is two prisms with rotated squares as a base. They are situated in such a way that any half space defining one contains points of the other. Therefore, the conclusion is false.</p> <p>Moreover, this can be generalized by taking $n$-dimensional cubic-prisms and orient them in this way (here an $n$-dimensional cubic-prism has a $n-1$ dimensional cube as it's base). </p> <p>Also, it sounds like you have a solution for dimension 2. In light of how the conclusion is false for higher dimensions, I do not believe that such a solution will be simple in the sense that it avoids using geometry in 2 dimensions. </p> Answer 7: <p>Great question! it's amazing how many people claim this but without any argument or proof. I couldn't find any proof of it online but I did find a proof myself. 
I reformulated the theorem so it is correct in $n$ dimensions, and then extracted the original theorem from it in the $2D$ case.</p> <p><strong>Theorem (general):</strong> let $A, B \subset \mathbb{R}^n$ be 2 convex compact polytopes such that $A\cap B = \emptyset$, then there exists a separating hyperplane with normal vector orthogonal to one of the facets of the Minkowski sum $A+(-B)$</p> <p>To make it easier on terminology I will work in $\mathbb{R}^2$ here, but the proof works exactly the same way for any other $n$ (just different words)</p> <p><strong>Theorem (2D):</strong> let $A, B \subset \mathbb{R}^2$ be 2 convex compact polygons such that $A\cap B = \emptyset$, then there exists a separating line with normal vector orthogonal to one of the sides the Minkowski sum $A+(-B)$</p> <p><strong>Proof:</strong></p> <p><em>Step 1, choosing the axis:</em> Denote $K := A+(-B)$. $K$ is itself a compact convex polygon. We can extend each of its sides to a line. Since K is convex it will always be completely in 1 of the 2 halfplanes given by such a line. K is even equal to the intersection of all these halfplanes. (<a href="https://en.wikipedia.org/wiki/Convex_polytope#Intersection_of_half-spaces" rel="noreferrer">this is one of the definitions of a convex polygon</a>). write $$K = \bigcap_i H_i$$ where the $H_i$ are these halfplanes. Since $A\cap B=\emptyset$ we have $0\notin K$. This implies $0\notin H_i$ for some $i$. This $H_i$ was derived from a line $l$ that was the extention of one of the sides of $K$. Now take $v \in l$ such that $\lVert v \rVert = \inf_{x\in l} \lVert x\rVert$. This vector exists because of the <a href="https://en.wikipedia.org/wiki/Hilbert_projection_theorem" rel="noreferrer">Hilbert projection theorem</a>. Note that, since $0\not\in l$, we have $v\ne 0$.</p> <p><em>Step 2, orthogonality:</em> Take a unit vector $w$ in the direction of $l$ (in $n$-D: orthogonal to the normal of $l$). 
We then have that $v-aw \in l$ for all $a \in \Bbb R$, and because $v$ has minimal norm: \begin{align*} \lVert v \rVert ^2 &amp; \leq \lVert v - aw\rVert ^2\\ &amp; = \langle v- aw,v-aw\rangle\\ &amp;= \lVert v \rVert ^2 - 2 a \langle v,w \rangle + a^2\\ \end{align*} Now, let $a = \langle v,w\rangle$ then we have $$\lVert v\rVert ^2 \leq \lVert v\rVert ^2 - \langle v,w \rangle ^2$$ so $\langle v,w\rangle = 0$ and $v$ is orthogonal to $l$.</p> <p><em>Step 3, separation:</em> Take a look at the proof of the hyperplane separation theorem given <a href="https://en.wikipedia.org/wiki/Hyperplane_separation_theorem#Statements_and_proof" rel="noreferrer">here</a>. If we prove that $$ \lVert v \rVert ^2 \leq \lVert v + t(x-v) \rVert^2 $$ for all $t\in [0,1]$ and for all $x\in K$, then the rest of the proof can be done exactly the same. Ask me if you want me to explicitly write it down here but it's lengthy. To show the desired inequality, remember that $0$ is on the other side of $l$ than $K$. since a halfplane is convex, $v + t(x-v)$ will be on the same side of $l$ as $K$. This means that the line segment connecting $0$ and $v + t(x-v)$ will intersect $l$ at some point $s$, where $$\lVert s\rVert \leq \lVert v + t(x-v) \rVert $$ Now because $s\in l$ we have $\lVert v\rVert \leq \lVert s \rVert$ and we are done.</p> <p>Your version of the theorem follows from this because in 2D, the Minkowski sum of 2 polygons can be obtained just by sliding arroud the edges of the original polygons. This is no longer true in 3D or higher so that explains the counter example given in the other answer that looked like this: <a href="https://i.stack.imgur.com/LMBfH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LMBfH.png" alt="enter image description here"></a></p> <p>However, the modified theorem does hold in 3D and higher, so we need to look at the Minkowski sum. 
Call the bottom object $A$ and the top one $B$, then $A+(-B)$ will look like this:<a href="https://i.stack.imgur.com/JRZwF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JRZwF.png" alt="enter image description here"></a></p> <p>This Minkowski sum does provide a separating axis orthogonal to its top facet.</p> Answer 8: <p>I think this is an interesting question. </p> <p>However, the result is <em>not true for dimension $n\geq 3$</em>. Consider the following counter example for $n=3$.</p> <p><a href="https://i.stack.imgur.com/8PXjR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8PXjR.png" alt="enter image description here"></a> The counter example is probably best seen using the picture, although I can provide the half-space description if necessary. The counter example is two prisms with rotated squares as a base. They are situated in such a way that any half space defining one contains points of the other. Therefore, the conclusion is false.</p> <p>Moreover, this can be generalized by taking $n$-dimensional cubic-prisms and orient them in this way (here an $n$-dimensional cubic-prism has a $n-1$ dimensional cube as it's base). </p> <p>Also, it sounds like you have a solution for dimension 2. In light of how the conclusion is false for higher dimensions, I do not believe that such a solution will be simple in the sense that it avoids using geometry in 2 dimensions. </p> Answer 9: <p>Great question! it's amazing how many people claim this but without any argument or proof. I couldn't find any proof of it online but I did find a proof myself. 
I reformulated the theorem so it is correct in $n$ dimensions, and then extracted the original theorem from it in the $2D$ case.</p> <p><strong>Theorem (general):</strong> let $A, B \subset \mathbb{R}^n$ be 2 convex compact polytopes such that $A\cap B = \emptyset$, then there exists a separating hyperplane with normal vector orthogonal to one of the facets of the Minkowski sum $A+(-B)$</p> <p>To make it easier on terminology I will work in $\mathbb{R}^2$ here, but the proof works exactly the same way for any other $n$ (just different words)</p> <p><strong>Theorem (2D):</strong> let $A, B \subset \mathbb{R}^2$ be 2 convex compact polygons such that $A\cap B = \emptyset$, then there exists a separating line with normal vector orthogonal to one of the sides the Minkowski sum $A+(-B)$</p> <p><strong>Proof:</strong></p> <p><em>Step 1, choosing the axis:</em> Denote $K := A+(-B)$. $K$ is itself a compact convex polygon. We can extend each of its sides to a line. Since K is convex it will always be completely in 1 of the 2 halfplanes given by such a line. K is even equal to the intersection of all these halfplanes. (<a href="https://en.wikipedia.org/wiki/Convex_polytope#Intersection_of_half-spaces" rel="noreferrer">this is one of the definitions of a convex polygon</a>). write $$K = \bigcap_i H_i$$ where the $H_i$ are these halfplanes. Since $A\cap B=\emptyset$ we have $0\notin K$. This implies $0\notin H_i$ for some $i$. This $H_i$ was derived from a line $l$ that was the extention of one of the sides of $K$. Now take $v \in l$ such that $\lVert v \rVert = \inf_{x\in l} \lVert x\rVert$. This vector exists because of the <a href="https://en.wikipedia.org/wiki/Hilbert_projection_theorem" rel="noreferrer">Hilbert projection theorem</a>. Note that, since $0\not\in l$, we have $v\ne 0$.</p> <p><em>Step 2, orthogonality:</em> Take a unit vector $w$ in the direction of $l$ (in $n$-D: orthogonal to the normal of $l$). 
We then have that $v-aw \in l$ for all $a \in \Bbb R$, and because $v$ has minimal norm: \begin{align*} \lVert v \rVert ^2 &amp; \leq \lVert v - aw\rVert ^2\\ &amp; = \langle v- aw,v-aw\rangle\\ &amp;= \lVert v \rVert ^2 - 2 a \langle v,w \rangle + a^2\\ \end{align*} Now, let $a = \langle v,w\rangle$ then we have $$\lVert v\rVert ^2 \leq \lVert v\rVert ^2 - \langle v,w \rangle ^2$$ so $\langle v,w\rangle = 0$ and $v$ is orthogonal to $l$.</p> <p><em>Step 3, separation:</em> Take a look at the proof of the hyperplane separation theorem given <a href="https://en.wikipedia.org/wiki/Hyperplane_separation_theorem#Statements_and_proof" rel="noreferrer">here</a>. If we prove that $$ \lVert v \rVert ^2 \leq \lVert v + t(x-v) \rVert^2 $$ for all $t\in [0,1]$ and for all $x\in K$, then the rest of the proof can be done exactly the same. Ask me if you want me to explicitly write it down here but it's lengthy. To show the desired inequality, remember that $0$ is on the other side of $l$ than $K$. since a halfplane is convex, $v + t(x-v)$ will be on the same side of $l$ as $K$. This means that the line segment connecting $0$ and $v + t(x-v)$ will intersect $l$ at some point $s$, where $$\lVert s\rVert \leq \lVert v + t(x-v) \rVert $$ Now because $s\in l$ we have $\lVert v\rVert \leq \lVert s \rVert$ and we are done.</p> <p>Your version of the theorem follows from this because in 2D, the Minkowski sum of 2 polygons can be obtained just by sliding arroud the edges of the original polygons. This is no longer true in 3D or higher so that explains the counter example given in the other answer that looked like this: <a href="https://i.stack.imgur.com/LMBfH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LMBfH.png" alt="enter image description here"></a></p> <p>However, the modified theorem does hold in 3D and higher, so we need to look at the Minkowski sum. 
Call the bottom object $A$ and the top one $B$, then $A+(-B)$ will look like this:<a href="https://i.stack.imgur.com/JRZwF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JRZwF.png" alt="enter image description here"></a></p> <p>This Minkowski sum does provide a separating axis orthogonal to its top facet.</p> Answer 10: <p>I think this is an interesting question. </p> <p>However, the result is <em>not true for dimension $n\geq 3$</em>. Consider the following counter example for $n=3$.</p> <p><a href="https://i.stack.imgur.com/8PXjR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8PXjR.png" alt="enter image description here"></a> The counter example is probably best seen using the picture, although I can provide the half-space description if necessary. The counter example is two prisms with rotated squares as a base. They are situated in such a way that any half space defining one contains points of the other. Therefore, the conclusion is false.</p> <p>Moreover, this can be generalized by taking $n$-dimensional cubic-prisms and orient them in this way (here an $n$-dimensional cubic-prism has a $n-1$ dimensional cube as it's base). </p> <p>Also, it sounds like you have a solution for dimension 2. In light of how the conclusion is false for higher dimensions, I do not believe that such a solution will be simple in the sense that it avoids using geometry in 2 dimensions. </p> Answer 11: <p>Great question! it's amazing how many people claim this but without any argument or proof. I couldn't find any proof of it online but I did find a proof myself. 
I reformulated the theorem so it is correct in $n$ dimensions, and then extracted the original theorem from it in the $2D$ case.</p> <p><strong>Theorem (general):</strong> let $A, B \subset \mathbb{R}^n$ be 2 convex compact polytopes such that $A\cap B = \emptyset$, then there exists a separating hyperplane with normal vector orthogonal to one of the facets of the Minkowski sum $A+(-B)$</p> <p>To make it easier on terminology I will work in $\mathbb{R}^2$ here, but the proof works exactly the same way for any other $n$ (just different words)</p> <p><strong>Theorem (2D):</strong> let $A, B \subset \mathbb{R}^2$ be 2 convex compact polygons such that $A\cap B = \emptyset$, then there exists a separating line with normal vector orthogonal to one of the sides the Minkowski sum $A+(-B)$</p> <p><strong>Proof:</strong></p> <p><em>Step 1, choosing the axis:</em> Denote $K := A+(-B)$. $K$ is itself a compact convex polygon. We can extend each of its sides to a line. Since K is convex it will always be completely in 1 of the 2 halfplanes given by such a line. K is even equal to the intersection of all these halfplanes. (<a href="https://en.wikipedia.org/wiki/Convex_polytope#Intersection_of_half-spaces" rel="noreferrer">this is one of the definitions of a convex polygon</a>). write $$K = \bigcap_i H_i$$ where the $H_i$ are these halfplanes. Since $A\cap B=\emptyset$ we have $0\notin K$. This implies $0\notin H_i$ for some $i$. This $H_i$ was derived from a line $l$ that was the extention of one of the sides of $K$. Now take $v \in l$ such that $\lVert v \rVert = \inf_{x\in l} \lVert x\rVert$. This vector exists because of the <a href="https://en.wikipedia.org/wiki/Hilbert_projection_theorem" rel="noreferrer">Hilbert projection theorem</a>. Note that, since $0\not\in l$, we have $v\ne 0$.</p> <p><em>Step 2, orthogonality:</em> Take a unit vector $w$ in the direction of $l$ (in $n$-D: orthogonal to the normal of $l$). 
We then have that $v-aw \in l$ for all $a \in \Bbb R$, and because $v$ has minimal norm: \begin{align*} \lVert v \rVert ^2 &amp; \leq \lVert v - aw\rVert ^2\\ &amp; = \langle v- aw,v-aw\rangle\\ &amp;= \lVert v \rVert ^2 - 2 a \langle v,w \rangle + a^2\\ \end{align*} Now, let $a = \langle v,w\rangle$ then we have $$\lVert v\rVert ^2 \leq \lVert v\rVert ^2 - \langle v,w \rangle ^2$$ so $\langle v,w\rangle = 0$ and $v$ is orthogonal to $l$.</p> <p><em>Step 3, separation:</em> Take a look at the proof of the hyperplane separation theorem given <a href="https://en.wikipedia.org/wiki/Hyperplane_separation_theorem#Statements_and_proof" rel="noreferrer">here</a>. If we prove that $$ \lVert v \rVert ^2 \leq \lVert v + t(x-v) \rVert^2 $$ for all $t\in [0,1]$ and for all $x\in K$, then the rest of the proof can be done exactly the same. Ask me if you want me to explicitly write it down here but it's lengthy. To show the desired inequality, remember that $0$ is on the other side of $l$ than $K$. since a halfplane is convex, $v + t(x-v)$ will be on the same side of $l$ as $K$. This means that the line segment connecting $0$ and $v + t(x-v)$ will intersect $l$ at some point $s$, where $$\lVert s\rVert \leq \lVert v + t(x-v) \rVert $$ Now because $s\in l$ we have $\lVert v\rVert \leq \lVert s \rVert$ and we are done.</p> <p>Your version of the theorem follows from this because in 2D, the Minkowski sum of 2 polygons can be obtained just by sliding arroud the edges of the original polygons. This is no longer true in 3D or higher so that explains the counter example given in the other answer that looked like this: <a href="https://i.stack.imgur.com/LMBfH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LMBfH.png" alt="enter image description here"></a></p> <p>However, the modified theorem does hold in 3D and higher, so we need to look at the Minkowski sum. 
Call the bottom object $A$ and the top one $B$, then $A+(-B)$ will look like this:<a href="https://i.stack.imgur.com/JRZwF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JRZwF.png" alt="enter image description here"></a></p> <p>This Minkowski sum does provide a separating axis orthogonal to its top facet.</p> Answer 12: <p>I think this is an interesting question.</p> <p>However, the result is <em>not true for dimension $n\geq 3$</em>. Consider the following counter example for $n=3$.</p> <p><a href="https://i.stack.imgur.com/8PXjR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8PXjR.png" alt="enter image description here"></a> The counter example is probably best seen using the picture, although I can provide the half-space description if necessary. The counter example is two prisms with rotated squares as a base. They are situated in such a way that any half space defining one contains points of the other. Therefore, the conclusion is false.</p> <p>Moreover, this can be generalized by taking $n$-dimensional cubic-prisms and orienting them in this way (here an $n$-dimensional cubic-prism has an $n-1$ dimensional cube as its base).</p> <p>Also, it sounds like you have a solution for dimension 2. In light of how the conclusion is false for higher dimensions, I do not believe that such a solution will be simple in the sense that it avoids using geometry in 2 dimensions.</p>
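Step 1 of the proof above can be illustrated numerically in two dimensions. The sketch below is an editorial illustration, not part of the original answers, and all function names (`convex_hull`, `separating_axis`) are made up. It builds $K = A+(-B)$ from vertex differences, walks the hull edges counter-clockwise, and returns the outward normal of the first edge whose supporting half-plane excludes the origin; projecting both polygons onto that normal confirms the separation.

```python
from itertools import product

def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a counter-clockwise turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    # Andrew's monotone chain; returns hull vertices in counter-clockwise order
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, reversed(pts))):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

def separating_axis(A, B):
    # Step 1 of the proof: K = A + (-B); look for a side of K whose
    # supporting half-plane does not contain the origin.
    K = convex_hull([(a[0] - b[0], a[1] - b[1]) for a, b in product(A, B)])
    for i in range(len(K)):
        p, q = K[i], K[(i + 1) % len(K)]
        n = (q[1] - p[1], -(q[0] - p[0]))   # outward normal of edge p -> q
        # K lies in {x : <n, x> <= <n, p>}; the origin violates this iff <n, p> < 0
        if n[0] * p[0] + n[1] * p[1] < 0:
            return n
    return None  # 0 is in K, i.e. the polygons intersect (or touch)

A = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square
B = [(3, 0), (4, 0), (4, 1), (3, 1)]   # disjoint square to its right
axis = separating_axis(A, B)
assert axis is not None
hi = max(axis[0] * a[0] + axis[1] * a[1] for a in A)
lo = min(axis[0] * b[0] + axis[1] * b[1] for b in B)
assert hi < lo  # projections of A and B onto the axis do not overlap
```

Note that for two copies of the same polygon the origin lies in $K$, every edge test fails, and `separating_axis` returns `None`, matching the theorem's hypothesis $A\cap B=\emptyset$.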
Help for series calculation $\sum_{n\ge1} \frac{1}{4n^3-n}$
2,106,470
6
https://math.stackexchange.com/questions/2106470/help-for-series-calculation-sum-n-ge1-frac14n3-n
<p>I want to find the value of the following series.</p> <p>$$\sum_{n\ge1} \frac{1}{4n^3-n}$$ </p> <p>So, first of all, partial fractions: $$\frac{1}{4n^3-n}=\frac{1}{2n+1}+\frac{1}{2n-1}-\frac{1}{n}.$$</p> <p>Next, I consider the geometric series $$\sum_{n\ge1} x^{2n-2}+x^{2n}-x^{n-1}=\frac{1}{1-x^2}+\frac{x^2}{1-x^2}-\frac{1}{1-x}\tag{1}$$ where ${-1\lt x\lt1}$.</p> <p>Then, I integrate both sides of (1) from $0$ to $1$:</p> <p>$$\sum_{n\ge1}\frac{1}{2n-1}+\frac{1}{2n+1}-\frac{1}{n}=\int_0^1 \frac{x^2-x}{1-x^2}dx$$</p> <p>Finally, I get the value $\ln(2)-1$ from the improper integral, but that is less than $0$. What did I do wrong? All help is appreciated.</p>
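As a quick sanity check on the question (an editorial addition, not part of the original post): the partial-fraction decomposition can be verified exactly with rational arithmetic, and since every term of the series is positive, the proposed value $\ln(2)-1<0$ is impossible before any integration argument is even examined.

```python
from fractions import Fraction

# Exact check of 1/(4n^3 - n) = 1/(2n+1) + 1/(2n-1) - 1/n for many n.
for k in range(1, 100):
    n = Fraction(k)
    assert 1 / (4 * n**3 - n) == 1 / (2 * n + 1) + 1 / (2 * n - 1) - 1 / n

# Every term is positive, so the sum is positive -- it cannot equal ln(2) - 1 < 0.
partial = sum(1.0 / (4 * k**3 - k) for k in range(1, 1000))
assert partial > 0
```

The positive partial sums already signal that the termwise integration step must be where the argument breaks down.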
[ { "answer_id": 2106505, "score": 5, "text": "<p><em>Following a comment of mine above:</em></p>\n\n<ul>\n<li><p>without checking too carefully, the issue in your step most likely resides in the <em>\"I integrate both sides of (1) from 0 to 1.\"</em> You then integrate the LHS termwise -- <em>why</em> can you do that? The interval of convergence of your power series is $1$, so it is <em>not</em> obvious you can swap $\\sum_n$ and $\\int_0^1$. And, as it turns out, worse than \"not obvious\" it appears to be wrong.</p></li>\n<li><p>Now, another approach to compute the sum: starting with your $(1)$, we write, for $N\\geq 1$,\n$$\\begin{align}\n\\sum_{n=1}^N a_n &amp;= \\sum_{n=1}^N \\frac{1}{2n+1}+\\sum_{n=1}^N \\frac{1}{2n-1}-\\sum_{n=1}^N \\frac{1}{n}\\\\\n&amp;= \\sum_{n=1}^N \\left(\\frac{1}{2n+1}+\\frac{1}{2n}\\right)+\\sum_{n=1}^N \\left(\\frac{1}{2n-1}+\\frac{1}{2n}\\right)-2\\sum_{n=1}^N \\frac{1}{n} \\\\\n&amp;= \\sum_{n=2}^{2N+1} \\frac{1}{n}+\\sum_{n=1}^{2N} \\frac{1}{n}-2\\sum_{n=1}^N \\frac{1}{n} \\\\\n&amp;= H_{2N+1}-1+H_{2N}-2H_N \\\\\n&amp;= \\ln(2N+1)+\\ln(2N)-2\\ln N - 1 +o(1)\\\\\n&amp;= 2\\ln 2 - 1 + \\ln(N+\\frac{1}{2})+ \\ln(N+\\frac{1}{2})-2\\ln N +o(1)\\\\\n&amp;= 2\\ln 2 - 1 + \\ln\\frac{N+\\frac{1}{2}}{N}+ \\ln\\frac{N-\\frac{1}{2}}{N} +o(1)\\\\\n&amp;\\xrightarrow[N\\to\\infty]{} \\boxed{2\\ln 2 -1}\n\\end{align}$$\nwhere $H_N = \\sum_{n=1}^N \\frac{1}{n}$ denotes the $N$-th <a href=\"https://en.wikipedia.org/wiki/Harmonic_number\">Harmonic number</a>, which <a href=\"https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence\">satisfies</a>\n$$\nH_N = \\ln N + \\gamma + o(1)\n$$\nwhen $N\\to\\infty$ ($\\gamma$ being Euler's constant). We also used that by continuity of the logarithm, $\\ln\\frac{N\\pm\\frac{1}{2}}{N} \\xrightarrow[N\\to\\infty]{} \\ln 1=0$.</p></li>\n</ul>\n" }, { "answer_id": 2106649, "score": 4, "text": "<p>You can do this way. It is easier. Let\n$$ f(x)=\\sum_{n=1}^\\infty\\frac{1}{4n^3-n}x^{2n+1}. 
$$\nThen $f(1)=\\sum_{n=1}^\\infty\\frac{1}{4n^3-n}$ and\n$$ f'(x)=\\sum_{n=1}^\\infty\\frac{1}{(2n-1)n}x^{2n}, f''(x)=2\\sum_{n=1}^\\infty\\frac{1}{(2n-1)}x^{2n-1}, f'''(x)=2\\sum_{n=1}^\\infty x^{2n-2}=\\frac2{1-x^2}$$\nand hence\n$$ f(1)=\\int_0^1\\int_0^s\\int_0^t\\frac2{1-x^2}\\,dx\\,dt\\,ds=2\\ln2-1.$$</p>\n" }
Euler's constant). We also used that by continuity of the logarithm, $\\ln\\frac{N\\pm\\frac{1}{2}}{N} \\xrightarrow[N\\to\\infty]{} \\ln 1=0$.</p></li>\n</ul>\n" }, { "answer_id": 2106649, "score": 4, "text": "<p>You can do this way. It is easier. Let\n$$ f(x)=\\sum_{n=1}^\\infty\\frac{1}{4n^3-n}x^{2n+1}. $$\nThen $f(1)=\\sum_{n=1}^\\infty\\frac{1}{4n^3-n}$ and\n$$ f'(x)=\\sum_{n=1}^\\infty\\frac{1}{(2n-1)n}x^{2n}, f''(x)=2\\sum_{n=1}^\\infty\\frac{1}{(2n-1)}x^{2n-1}, f'''(x)=2\\sum_{n=1}^\\infty x^{2n-2}=\\frac2{1-x^2}$$\nand hence\n$$ f(1)=\\int_0^1\\int_0^s\\int_0^t\\frac2{1-x^2}dxdtds=2\\ln2-1.$$</p>\n" }, { "answer_id": 2106505, "score": 5, "text": "<p><em>Following a comment of mine above:</em></p>\n\n<ul>\n<li><p>without checking too carefully, the issue in your step most likely resides in the <em>\"I integrate both sides of (1) from 0 to 1.\"</em> You then integrate the LHS termwise -- <em>why</em> can you do that? The interval of convergence of your power series is $1$, so it is <em>not</em> obvious you can swap $\\sum_n$ and $\\int_0^1$. 
And, as it turns out, worse than \"not obvious\" it appears to be wrong.</p></li>\n<li><p>Now, another approach to compute the sum: starting with your $(1)$, we write, for $N\\geq 1$,\n$$\\begin{align}\n\\sum_{n=1}^N a_n &amp;= \\sum_{n=1}^N \\frac{1}{2n+1}+\\sum_{n=1}^N \\frac{1}{2n-1}-\\sum_{n=1}^N \\frac{1}{n}\\\\\n&amp;= \\sum_{n=1}^N \\left(\\frac{1}{2n+1}+\\frac{1}{2n}\\right)+\\sum_{n=1}^N \\left(\\frac{1}{2n-1}+\\frac{1}{2n}\\right)-2\\sum_{n=1}^N \\frac{1}{n} \\\\\n&amp;= \\sum_{n=2}^{2N+1} \\frac{1}{n}+\\sum_{n=1}^{2N} \\frac{1}{n}-2\\sum_{n=1}^N \\frac{1}{n} \\\\\n&amp;= H_{2N+1}-1+H_{2N}-2H_N \\\\\n&amp;= \\ln(2N+1)+\\ln(2N)-2\\ln N - 1 +o(1)\\\\\n&amp;= 2\\ln 2 - 1 + \\ln(N+\\frac{1}{2})+ \\ln(N+\\frac{1}{2})-2\\ln N +o(1)\\\\\n&amp;= 2\\ln 2 - 1 + \\ln\\frac{N+\\frac{1}{2}}{N}+ \\ln\\frac{N-\\frac{1}{2}}{N} +o(1)\\\\\n&amp;\\xrightarrow[N\\to\\infty]{} \\boxed{2\\ln 2 -1}\n\\end{align}$$\nwhere $H_N = \\sum_{n=1}^N \\frac{1}{n}$ denotes the $N$-th <a href=\"https://en.wikipedia.org/wiki/Harmonic_number\">Harmonic number</a>, which <a href=\"https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence\">satisfies</a>\n$$\nH_N = \\ln N + \\gamma + o(1)\n$$\nwhen $N\\to\\infty$ ($\\gamma$ being Euler's constant). We also used that by continuity of the logarithm, $\\ln\\frac{N\\pm\\frac{1}{2}}{N} \\xrightarrow[N\\to\\infty]{} \\ln 1=0$.</p></li>\n</ul>\n" }, { "answer_id": 2106649, "score": 4, "text": "<p>You can do this way. It is easier. Let\n$$ f(x)=\\sum_{n=1}^\\infty\\frac{1}{4n^3-n}x^{2n+1}. 
$$\nThen $f(1)=\\sum_{n=1}^\\infty\\frac{1}{4n^3-n}$ and\n$$ f'(x)=\\sum_{n=1}^\\infty\\frac{1}{(2n-1)n}x^{2n}, f''(x)=2\\sum_{n=1}^\\infty\\frac{1}{(2n-1)}x^{2n-1}, f'''(x)=2\\sum_{n=1}^\\infty x^{2n-2}=\\frac2{1-x^2}$$\nand hence\n$$ f(1)=\\int_0^1\\int_0^s\\int_0^t\\frac2{1-x^2}dxdtds=2\\ln2-1.$$</p>\n" }, { "answer_id": 2106505, "score": 5, "text": "<p><em>Following a comment of mine above:</em></p>\n\n<ul>\n<li><p>without checking too carefully, the issue in your step most likely resides in the <em>\"I integrate both sides of (1) from 0 to 1.\"</em> You then integrate the LHS termwise -- <em>why</em> can you do that? The interval of convergence of your power series is $1$, so it is <em>not</em> obvious you can swap $\\sum_n$ and $\\int_0^1$. And, as it turns out, worse than \"not obvious\" it appears to be wrong.</p></li>\n<li><p>Now, another approach to compute the sum: starting with your $(1)$, we write, for $N\\geq 1$,\n$$\\begin{align}\n\\sum_{n=1}^N a_n &amp;= \\sum_{n=1}^N \\frac{1}{2n+1}+\\sum_{n=1}^N \\frac{1}{2n-1}-\\sum_{n=1}^N \\frac{1}{n}\\\\\n&amp;= \\sum_{n=1}^N \\left(\\frac{1}{2n+1}+\\frac{1}{2n}\\right)+\\sum_{n=1}^N \\left(\\frac{1}{2n-1}+\\frac{1}{2n}\\right)-2\\sum_{n=1}^N \\frac{1}{n} \\\\\n&amp;= \\sum_{n=2}^{2N+1} \\frac{1}{n}+\\sum_{n=1}^{2N} \\frac{1}{n}-2\\sum_{n=1}^N \\frac{1}{n} \\\\\n&amp;= H_{2N+1}-1+H_{2N}-2H_N \\\\\n&amp;= \\ln(2N+1)+\\ln(2N)-2\\ln N - 1 +o(1)\\\\\n&amp;= 2\\ln 2 - 1 + \\ln(N+\\frac{1}{2})+ \\ln(N+\\frac{1}{2})-2\\ln N +o(1)\\\\\n&amp;= 2\\ln 2 - 1 + \\ln\\frac{N+\\frac{1}{2}}{N}+ \\ln\\frac{N-\\frac{1}{2}}{N} +o(1)\\\\\n&amp;\\xrightarrow[N\\to\\infty]{} \\boxed{2\\ln 2 -1}\n\\end{align}$$\nwhere $H_N = \\sum_{n=1}^N \\frac{1}{n}$ denotes the $N$-th <a href=\"https://en.wikipedia.org/wiki/Harmonic_number\">Harmonic number</a>, which <a href=\"https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence\">satisfies</a>\n$$\nH_N = \\ln N + \\gamma + o(1)\n$$\nwhen $N\\to\\infty$ ($\\gamma$ being 
Euler's constant). We also used that by continuity of the logarithm, $\\ln\\frac{N\\pm\\frac{1}{2}}{N} \\xrightarrow[N\\to\\infty]{} \\ln 1=0$.</p></li>\n</ul>\n" }, { "answer_id": 2106649, "score": 4, "text": "<p>You can do this way. It is easier. Let\n$$ f(x)=\\sum_{n=1}^\\infty\\frac{1}{4n^3-n}x^{2n+1}. $$\nThen $f(1)=\\sum_{n=1}^\\infty\\frac{1}{4n^3-n}$ and\n$$ f'(x)=\\sum_{n=1}^\\infty\\frac{1}{(2n-1)n}x^{2n}, f''(x)=2\\sum_{n=1}^\\infty\\frac{1}{(2n-1)}x^{2n-1}, f'''(x)=2\\sum_{n=1}^\\infty x^{2n-2}=\\frac2{1-x^2}$$\nand hence\n$$ f(1)=\\int_0^1\\int_0^s\\int_0^t\\frac2{1-x^2}dxdtds=2\\ln2-1.$$</p>\n" }, { "answer_id": 2106505, "score": 5, "text": "<p><em>Following a comment of mine above:</em></p>\n\n<ul>\n<li><p>without checking too carefully, the issue in your step most likely resides in the <em>\"I integrate both sides of (1) from 0 to 1.\"</em> You then integrate the LHS termwise -- <em>why</em> can you do that? The interval of convergence of your power series is $1$, so it is <em>not</em> obvious you can swap $\\sum_n$ and $\\int_0^1$. 
And, as it turns out, worse than \"not obvious\" it appears to be wrong.</p></li>\n<li><p>Now, another approach to compute the sum: starting with your $(1)$, we write, for $N\\geq 1$,\n$$\\begin{align}\n\\sum_{n=1}^N a_n &amp;= \\sum_{n=1}^N \\frac{1}{2n+1}+\\sum_{n=1}^N \\frac{1}{2n-1}-\\sum_{n=1}^N \\frac{1}{n}\\\\\n&amp;= \\sum_{n=1}^N \\left(\\frac{1}{2n+1}+\\frac{1}{2n}\\right)+\\sum_{n=1}^N \\left(\\frac{1}{2n-1}+\\frac{1}{2n}\\right)-2\\sum_{n=1}^N \\frac{1}{n} \\\\\n&amp;= \\sum_{n=2}^{2N+1} \\frac{1}{n}+\\sum_{n=1}^{2N} \\frac{1}{n}-2\\sum_{n=1}^N \\frac{1}{n} \\\\\n&amp;= H_{2N+1}-1+H_{2N}-2H_N \\\\\n&amp;= \\ln(2N+1)+\\ln(2N)-2\\ln N - 1 +o(1)\\\\\n&amp;= 2\\ln 2 - 1 + \\ln(N+\\frac{1}{2})+ \\ln(N+\\frac{1}{2})-2\\ln N +o(1)\\\\\n&amp;= 2\\ln 2 - 1 + \\ln\\frac{N+\\frac{1}{2}}{N}+ \\ln\\frac{N-\\frac{1}{2}}{N} +o(1)\\\\\n&amp;\\xrightarrow[N\\to\\infty]{} \\boxed{2\\ln 2 -1}\n\\end{align}$$\nwhere $H_N = \\sum_{n=1}^N \\frac{1}{n}$ denotes the $N$-th <a href=\"https://en.wikipedia.org/wiki/Harmonic_number\">Harmonic number</a>, which <a href=\"https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence\">satisfies</a>\n$$\nH_N = \\ln N + \\gamma + o(1)\n$$\nwhen $N\\to\\infty$ ($\\gamma$ being Euler's constant). We also used that by continuity of the logarithm, $\\ln\\frac{N\\pm\\frac{1}{2}}{N} \\xrightarrow[N\\to\\infty]{} \\ln 1=0$.</p></li>\n</ul>\n" }, { "answer_id": 2106649, "score": 4, "text": "<p>You can do this way. It is easier. Let\n$$ f(x)=\\sum_{n=1}^\\infty\\frac{1}{4n^3-n}x^{2n+1}. 
$$\nThen $f(1)=\\sum_{n=1}^\\infty\\frac{1}{4n^3-n}$ and\n$$ f'(x)=\\sum_{n=1}^\\infty\\frac{1}{(2n-1)n}x^{2n}, f''(x)=2\\sum_{n=1}^\\infty\\frac{1}{(2n-1)}x^{2n-1}, f'''(x)=2\\sum_{n=1}^\\infty x^{2n-2}=\\frac2{1-x^2}$$\nand hence\n$$ f(1)=\\int_0^1\\int_0^s\\int_0^t\\frac2{1-x^2}dxdtds=2\\ln2-1.$$</p>\n" }, { "answer_id": 2106505, "score": 5, "text": "<p><em>Following a comment of mine above:</em></p>\n\n<ul>\n<li><p>without checking too carefully, the issue in your step most likely resides in the <em>\"I integrate both sides of (1) from 0 to 1.\"</em> You then integrate the LHS termwise -- <em>why</em> can you do that? The interval of convergence of your power series is $1$, so it is <em>not</em> obvious you can swap $\\sum_n$ and $\\int_0^1$. And, as it turns out, worse than \"not obvious\" it appears to be wrong.</p></li>\n<li><p>Now, another approach to compute the sum: starting with your $(1)$, we write, for $N\\geq 1$,\n$$\\begin{align}\n\\sum_{n=1}^N a_n &amp;= \\sum_{n=1}^N \\frac{1}{2n+1}+\\sum_{n=1}^N \\frac{1}{2n-1}-\\sum_{n=1}^N \\frac{1}{n}\\\\\n&amp;= \\sum_{n=1}^N \\left(\\frac{1}{2n+1}+\\frac{1}{2n}\\right)+\\sum_{n=1}^N \\left(\\frac{1}{2n-1}+\\frac{1}{2n}\\right)-2\\sum_{n=1}^N \\frac{1}{n} \\\\\n&amp;= \\sum_{n=2}^{2N+1} \\frac{1}{n}+\\sum_{n=1}^{2N} \\frac{1}{n}-2\\sum_{n=1}^N \\frac{1}{n} \\\\\n&amp;= H_{2N+1}-1+H_{2N}-2H_N \\\\\n&amp;= \\ln(2N+1)+\\ln(2N)-2\\ln N - 1 +o(1)\\\\\n&amp;= 2\\ln 2 - 1 + \\ln(N+\\frac{1}{2})+ \\ln(N+\\frac{1}{2})-2\\ln N +o(1)\\\\\n&amp;= 2\\ln 2 - 1 + \\ln\\frac{N+\\frac{1}{2}}{N}+ \\ln\\frac{N-\\frac{1}{2}}{N} +o(1)\\\\\n&amp;\\xrightarrow[N\\to\\infty]{} \\boxed{2\\ln 2 -1}\n\\end{align}$$\nwhere $H_N = \\sum_{n=1}^N \\frac{1}{n}$ denotes the $N$-th <a href=\"https://en.wikipedia.org/wiki/Harmonic_number\">Harmonic number</a>, which <a href=\"https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence\">satisfies</a>\n$$\nH_N = \\ln N + \\gamma + o(1)\n$$\nwhen $N\\to\\infty$ ($\\gamma$ being 
Euler's constant). We also used that by continuity of the logarithm, $\\ln\\frac{N\\pm\\frac{1}{2}}{N} \\xrightarrow[N\\to\\infty]{} \\ln 1=0$.</p></li>\n</ul>\n" }, { "answer_id": 2106649, "score": 4, "text": "<p>You can do this way. It is easier. Let\n$$ f(x)=\\sum_{n=1}^\\infty\\frac{1}{4n^3-n}x^{2n+1}. $$\nThen $f(1)=\\sum_{n=1}^\\infty\\frac{1}{4n^3-n}$ and\n$$ f'(x)=\\sum_{n=1}^\\infty\\frac{1}{(2n-1)n}x^{2n}, f''(x)=2\\sum_{n=1}^\\infty\\frac{1}{(2n-1)}x^{2n-1}, f'''(x)=2\\sum_{n=1}^\\infty x^{2n-2}=\\frac2{1-x^2}$$\nand hence\n$$ f(1)=\\int_0^1\\int_0^s\\int_0^t\\frac2{1-x^2}dxdtds=2\\ln2-1.$$</p>\n" } ]
Question: Help for series calculation $\sum_{n\ge1} \frac{1}{4n^3-n}$ Answer 1: <p><em>Following a comment of mine above:</em></p> <ul> <li><p>Without checking too carefully, the issue in your step most likely resides in the <em>"I integrate both sides of (1) from 0 to 1."</em> You then integrate the LHS termwise -- <em>why</em> can you do that? The radius of convergence of your power series is $1$, so it is <em>not</em> obvious you can swap $\sum_n$ and $\int_0^1$. And, as it turns out, worse than "not obvious", it appears to be wrong.</p></li> <li><p>Now, another approach to compute the sum: starting with your $(1)$, we write, for $N\geq 1$, $$\begin{align} \sum_{n=1}^N a_n &amp;= \sum_{n=1}^N \frac{1}{2n+1}+\sum_{n=1}^N \frac{1}{2n-1}-\sum_{n=1}^N \frac{1}{n}\\ &amp;= \sum_{n=1}^N \left(\frac{1}{2n+1}+\frac{1}{2n}\right)+\sum_{n=1}^N \left(\frac{1}{2n-1}+\frac{1}{2n}\right)-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= \sum_{n=2}^{2N+1} \frac{1}{n}+\sum_{n=1}^{2N} \frac{1}{n}-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= H_{2N+1}-1+H_{2N}-2H_N \\ &amp;= \ln(2N+1)+\ln(2N)-2\ln N - 1 +o(1)\\ &amp;= 2\ln 2 - 1 + \ln(N+\tfrac{1}{2})+ \ln N-2\ln N +o(1)\\ &amp;= 2\ln 2 - 1 + \ln\frac{N+\frac{1}{2}}{N} +o(1)\\ &amp;\xrightarrow[N\to\infty]{} \boxed{2\ln 2 -1} \end{align}$$ where $H_N = \sum_{n=1}^N \frac{1}{n}$ denotes the $N$-th <a href="https://en.wikipedia.org/wiki/Harmonic_number">Harmonic number</a>, which <a href="https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence">satisfies</a> $$ H_N = \ln N + \gamma + o(1) $$ when $N\to\infty$ ($\gamma$ being Euler's constant). We also used that, by continuity of the logarithm, $\ln\frac{N+\frac{1}{2}}{N} \xrightarrow[N\to\infty]{} \ln 1=0$.</p></li> </ul> Answer 2: <p>You can do it this way. It is easier. Let $$ f(x)=\sum_{n=1}^\infty\frac{1}{4n^3-n}x^{2n+1}. 
$$ Then $f(1)=\sum_{n=1}^\infty\frac{1}{4n^3-n}$ and $$ f'(x)=\sum_{n=1}^\infty\frac{1}{(2n-1)n}x^{2n}, f''(x)=2\sum_{n=1}^\infty\frac{1}{(2n-1)}x^{2n-1}, f'''(x)=2\sum_{n=1}^\infty x^{2n-2}=\frac2{1-x^2}$$ and hence $$ f(1)=\int_0^1\int_0^s\int_0^t\frac2{1-x^2}dxdtds=2\ln2-1.$$</p> Answer 3: <p><em>Following a comment of mine above:</em></p> <ul> <li><p>without checking too carefully, the issue in your step most likely resides in the <em>"I integrate both sides of (1) from 0 to 1."</em> You then integrate the LHS termwise -- <em>why</em> can you do that? The interval of convergence of your power series is $1$, so it is <em>not</em> obvious you can swap $\sum_n$ and $\int_0^1$. And, as it turns out, worse than "not obvious" it appears to be wrong.</p></li> <li><p>Now, another approach to compute the sum: starting with your $(1)$, we write, for $N\geq 1$, $$\begin{align} \sum_{n=1}^N a_n &amp;= \sum_{n=1}^N \frac{1}{2n+1}+\sum_{n=1}^N \frac{1}{2n-1}-\sum_{n=1}^N \frac{1}{n}\\ &amp;= \sum_{n=1}^N \left(\frac{1}{2n+1}+\frac{1}{2n}\right)+\sum_{n=1}^N \left(\frac{1}{2n-1}+\frac{1}{2n}\right)-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= \sum_{n=2}^{2N+1} \frac{1}{n}+\sum_{n=1}^{2N} \frac{1}{n}-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= H_{2N+1}-1+H_{2N}-2H_N \\ &amp;= \ln(2N+1)+\ln(2N)-2\ln N - 1 +o(1)\\ &amp;= 2\ln 2 - 1 + \ln(N+\frac{1}{2})+ \ln(N+\frac{1}{2})-2\ln N +o(1)\\ &amp;= 2\ln 2 - 1 + \ln\frac{N+\frac{1}{2}}{N}+ \ln\frac{N-\frac{1}{2}}{N} +o(1)\\ &amp;\xrightarrow[N\to\infty]{} \boxed{2\ln 2 -1} \end{align}$$ where $H_N = \sum_{n=1}^N \frac{1}{n}$ denotes the $N$-th <a href="https://en.wikipedia.org/wiki/Harmonic_number">Harmonic number</a>, which <a href="https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence">satisfies</a> $$ H_N = \ln N + \gamma + o(1) $$ when $N\to\infty$ ($\gamma$ being Euler's constant). 
We also used that by continuity of the logarithm, $\ln\frac{N\pm\frac{1}{2}}{N} \xrightarrow[N\to\infty]{} \ln 1=0$.</p></li> </ul> Answer 4: <p>You can do this way. It is easier. Let $$ f(x)=\sum_{n=1}^\infty\frac{1}{4n^3-n}x^{2n+1}. $$ Then $f(1)=\sum_{n=1}^\infty\frac{1}{4n^3-n}$ and $$ f'(x)=\sum_{n=1}^\infty\frac{1}{(2n-1)n}x^{2n}, f''(x)=2\sum_{n=1}^\infty\frac{1}{(2n-1)}x^{2n-1}, f'''(x)=2\sum_{n=1}^\infty x^{2n-2}=\frac2{1-x^2}$$ and hence $$ f(1)=\int_0^1\int_0^s\int_0^t\frac2{1-x^2}dxdtds=2\ln2-1.$$</p> Answer 5: <p><em>Following a comment of mine above:</em></p> <ul> <li><p>without checking too carefully, the issue in your step most likely resides in the <em>"I integrate both sides of (1) from 0 to 1."</em> You then integrate the LHS termwise -- <em>why</em> can you do that? The interval of convergence of your power series is $1$, so it is <em>not</em> obvious you can swap $\sum_n$ and $\int_0^1$. And, as it turns out, worse than "not obvious" it appears to be wrong.</p></li> <li><p>Now, another approach to compute the sum: starting with your $(1)$, we write, for $N\geq 1$, $$\begin{align} \sum_{n=1}^N a_n &amp;= \sum_{n=1}^N \frac{1}{2n+1}+\sum_{n=1}^N \frac{1}{2n-1}-\sum_{n=1}^N \frac{1}{n}\\ &amp;= \sum_{n=1}^N \left(\frac{1}{2n+1}+\frac{1}{2n}\right)+\sum_{n=1}^N \left(\frac{1}{2n-1}+\frac{1}{2n}\right)-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= \sum_{n=2}^{2N+1} \frac{1}{n}+\sum_{n=1}^{2N} \frac{1}{n}-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= H_{2N+1}-1+H_{2N}-2H_N \\ &amp;= \ln(2N+1)+\ln(2N)-2\ln N - 1 +o(1)\\ &amp;= 2\ln 2 - 1 + \ln(N+\frac{1}{2})+ \ln(N+\frac{1}{2})-2\ln N +o(1)\\ &amp;= 2\ln 2 - 1 + \ln\frac{N+\frac{1}{2}}{N}+ \ln\frac{N-\frac{1}{2}}{N} +o(1)\\ &amp;\xrightarrow[N\to\infty]{} \boxed{2\ln 2 -1} \end{align}$$ where $H_N = \sum_{n=1}^N \frac{1}{n}$ denotes the $N$-th <a href="https://en.wikipedia.org/wiki/Harmonic_number">Harmonic number</a>, which <a 
href="https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence">satisfies</a> $$ H_N = \ln N + \gamma + o(1) $$ when $N\to\infty$ ($\gamma$ being Euler's constant). We also used that by continuity of the logarithm, $\ln\frac{N\pm\frac{1}{2}}{N} \xrightarrow[N\to\infty]{} \ln 1=0$.</p></li> </ul> Answer 6: <p>You can do this way. It is easier. Let $$ f(x)=\sum_{n=1}^\infty\frac{1}{4n^3-n}x^{2n+1}. $$ Then $f(1)=\sum_{n=1}^\infty\frac{1}{4n^3-n}$ and $$ f'(x)=\sum_{n=1}^\infty\frac{1}{(2n-1)n}x^{2n}, f''(x)=2\sum_{n=1}^\infty\frac{1}{(2n-1)}x^{2n-1}, f'''(x)=2\sum_{n=1}^\infty x^{2n-2}=\frac2{1-x^2}$$ and hence $$ f(1)=\int_0^1\int_0^s\int_0^t\frac2{1-x^2}dxdtds=2\ln2-1.$$</p> Answer 7: <p><em>Following a comment of mine above:</em></p> <ul> <li><p>without checking too carefully, the issue in your step most likely resides in the <em>"I integrate both sides of (1) from 0 to 1."</em> You then integrate the LHS termwise -- <em>why</em> can you do that? The interval of convergence of your power series is $1$, so it is <em>not</em> obvious you can swap $\sum_n$ and $\int_0^1$. 
And, as it turns out, worse than "not obvious" it appears to be wrong.</p></li> <li><p>Now, another approach to compute the sum: starting with your $(1)$, we write, for $N\geq 1$, $$\begin{align} \sum_{n=1}^N a_n &amp;= \sum_{n=1}^N \frac{1}{2n+1}+\sum_{n=1}^N \frac{1}{2n-1}-\sum_{n=1}^N \frac{1}{n}\\ &amp;= \sum_{n=1}^N \left(\frac{1}{2n+1}+\frac{1}{2n}\right)+\sum_{n=1}^N \left(\frac{1}{2n-1}+\frac{1}{2n}\right)-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= \sum_{n=2}^{2N+1} \frac{1}{n}+\sum_{n=1}^{2N} \frac{1}{n}-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= H_{2N+1}-1+H_{2N}-2H_N \\ &amp;= \ln(2N+1)+\ln(2N)-2\ln N - 1 +o(1)\\ &amp;= 2\ln 2 - 1 + \ln(N+\frac{1}{2})+ \ln(N+\frac{1}{2})-2\ln N +o(1)\\ &amp;= 2\ln 2 - 1 + \ln\frac{N+\frac{1}{2}}{N}+ \ln\frac{N-\frac{1}{2}}{N} +o(1)\\ &amp;\xrightarrow[N\to\infty]{} \boxed{2\ln 2 -1} \end{align}$$ where $H_N = \sum_{n=1}^N \frac{1}{n}$ denotes the $N$-th <a href="https://en.wikipedia.org/wiki/Harmonic_number">Harmonic number</a>, which <a href="https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence">satisfies</a> $$ H_N = \ln N + \gamma + o(1) $$ when $N\to\infty$ ($\gamma$ being Euler's constant). We also used that by continuity of the logarithm, $\ln\frac{N\pm\frac{1}{2}}{N} \xrightarrow[N\to\infty]{} \ln 1=0$.</p></li> </ul> Answer 8: <p>You can do this way. It is easier. Let $$ f(x)=\sum_{n=1}^\infty\frac{1}{4n^3-n}x^{2n+1}. $$ Then $f(1)=\sum_{n=1}^\infty\frac{1}{4n^3-n}$ and $$ f'(x)=\sum_{n=1}^\infty\frac{1}{(2n-1)n}x^{2n}, f''(x)=2\sum_{n=1}^\infty\frac{1}{(2n-1)}x^{2n-1}, f'''(x)=2\sum_{n=1}^\infty x^{2n-2}=\frac2{1-x^2}$$ and hence $$ f(1)=\int_0^1\int_0^s\int_0^t\frac2{1-x^2}dxdtds=2\ln2-1.$$</p> Answer 9: <p><em>Following a comment of mine above:</em></p> <ul> <li><p>without checking too carefully, the issue in your step most likely resides in the <em>"I integrate both sides of (1) from 0 to 1."</em> You then integrate the LHS termwise -- <em>why</em> can you do that? 
The interval of convergence of your power series is $1$, so it is <em>not</em> obvious you can swap $\sum_n$ and $\int_0^1$. And, as it turns out, worse than "not obvious" it appears to be wrong.</p></li> <li><p>Now, another approach to compute the sum: starting with your $(1)$, we write, for $N\geq 1$, $$\begin{align} \sum_{n=1}^N a_n &amp;= \sum_{n=1}^N \frac{1}{2n+1}+\sum_{n=1}^N \frac{1}{2n-1}-\sum_{n=1}^N \frac{1}{n}\\ &amp;= \sum_{n=1}^N \left(\frac{1}{2n+1}+\frac{1}{2n}\right)+\sum_{n=1}^N \left(\frac{1}{2n-1}+\frac{1}{2n}\right)-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= \sum_{n=2}^{2N+1} \frac{1}{n}+\sum_{n=1}^{2N} \frac{1}{n}-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= H_{2N+1}-1+H_{2N}-2H_N \\ &amp;= \ln(2N+1)+\ln(2N)-2\ln N - 1 +o(1)\\ &amp;= 2\ln 2 - 1 + \ln(N+\frac{1}{2})+ \ln(N+\frac{1}{2})-2\ln N +o(1)\\ &amp;= 2\ln 2 - 1 + \ln\frac{N+\frac{1}{2}}{N}+ \ln\frac{N-\frac{1}{2}}{N} +o(1)\\ &amp;\xrightarrow[N\to\infty]{} \boxed{2\ln 2 -1} \end{align}$$ where $H_N = \sum_{n=1}^N \frac{1}{n}$ denotes the $N$-th <a href="https://en.wikipedia.org/wiki/Harmonic_number">Harmonic number</a>, which <a href="https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence">satisfies</a> $$ H_N = \ln N + \gamma + o(1) $$ when $N\to\infty$ ($\gamma$ being Euler's constant). We also used that by continuity of the logarithm, $\ln\frac{N\pm\frac{1}{2}}{N} \xrightarrow[N\to\infty]{} \ln 1=0$.</p></li> </ul> Answer 10: <p>You can do this way. It is easier. Let $$ f(x)=\sum_{n=1}^\infty\frac{1}{4n^3-n}x^{2n+1}. 
$$ Then $f(1)=\sum_{n=1}^\infty\frac{1}{4n^3-n}$ and $$ f'(x)=\sum_{n=1}^\infty\frac{1}{(2n-1)n}x^{2n}, f''(x)=2\sum_{n=1}^\infty\frac{1}{(2n-1)}x^{2n-1}, f'''(x)=2\sum_{n=1}^\infty x^{2n-2}=\frac2{1-x^2}$$ and hence $$ f(1)=\int_0^1\int_0^s\int_0^t\frac2{1-x^2}dxdtds=2\ln2-1.$$</p> Answer 11: <p><em>Following a comment of mine above:</em></p> <ul> <li><p>without checking too carefully, the issue in your step most likely resides in the <em>"I integrate both sides of (1) from 0 to 1."</em> You then integrate the LHS termwise -- <em>why</em> can you do that? The interval of convergence of your power series is $1$, so it is <em>not</em> obvious you can swap $\sum_n$ and $\int_0^1$. And, as it turns out, worse than "not obvious" it appears to be wrong.</p></li> <li><p>Now, another approach to compute the sum: starting with your $(1)$, we write, for $N\geq 1$, $$\begin{align} \sum_{n=1}^N a_n &amp;= \sum_{n=1}^N \frac{1}{2n+1}+\sum_{n=1}^N \frac{1}{2n-1}-\sum_{n=1}^N \frac{1}{n}\\ &amp;= \sum_{n=1}^N \left(\frac{1}{2n+1}+\frac{1}{2n}\right)+\sum_{n=1}^N \left(\frac{1}{2n-1}+\frac{1}{2n}\right)-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= \sum_{n=2}^{2N+1} \frac{1}{n}+\sum_{n=1}^{2N} \frac{1}{n}-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= H_{2N+1}-1+H_{2N}-2H_N \\ &amp;= \ln(2N+1)+\ln(2N)-2\ln N - 1 +o(1)\\ &amp;= 2\ln 2 - 1 + \ln(N+\frac{1}{2})+ \ln(N+\frac{1}{2})-2\ln N +o(1)\\ &amp;= 2\ln 2 - 1 + \ln\frac{N+\frac{1}{2}}{N}+ \ln\frac{N-\frac{1}{2}}{N} +o(1)\\ &amp;\xrightarrow[N\to\infty]{} \boxed{2\ln 2 -1} \end{align}$$ where $H_N = \sum_{n=1}^N \frac{1}{n}$ denotes the $N$-th <a href="https://en.wikipedia.org/wiki/Harmonic_number">Harmonic number</a>, which <a href="https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence">satisfies</a> $$ H_N = \ln N + \gamma + o(1) $$ when $N\to\infty$ ($\gamma$ being Euler's constant). 
We also used that by continuity of the logarithm, $\ln\frac{N\pm\frac{1}{2}}{N} \xrightarrow[N\to\infty]{} \ln 1=0$.</p></li> </ul> Answer 12: <p>You can do this way. It is easier. Let $$ f(x)=\sum_{n=1}^\infty\frac{1}{4n^3-n}x^{2n+1}. $$ Then $f(1)=\sum_{n=1}^\infty\frac{1}{4n^3-n}$ and $$ f'(x)=\sum_{n=1}^\infty\frac{1}{(2n-1)n}x^{2n}, f''(x)=2\sum_{n=1}^\infty\frac{1}{(2n-1)}x^{2n-1}, f'''(x)=2\sum_{n=1}^\infty x^{2n-2}=\frac2{1-x^2}$$ and hence $$ f(1)=\int_0^1\int_0^s\int_0^t\frac2{1-x^2}dxdtds=2\ln2-1.$$</p> Answer 13: <p><em>Following a comment of mine above:</em></p> <ul> <li><p>without checking too carefully, the issue in your step most likely resides in the <em>"I integrate both sides of (1) from 0 to 1."</em> You then integrate the LHS termwise -- <em>why</em> can you do that? The interval of convergence of your power series is $1$, so it is <em>not</em> obvious you can swap $\sum_n$ and $\int_0^1$. And, as it turns out, worse than "not obvious" it appears to be wrong.</p></li> <li><p>Now, another approach to compute the sum: starting with your $(1)$, we write, for $N\geq 1$, $$\begin{align} \sum_{n=1}^N a_n &amp;= \sum_{n=1}^N \frac{1}{2n+1}+\sum_{n=1}^N \frac{1}{2n-1}-\sum_{n=1}^N \frac{1}{n}\\ &amp;= \sum_{n=1}^N \left(\frac{1}{2n+1}+\frac{1}{2n}\right)+\sum_{n=1}^N \left(\frac{1}{2n-1}+\frac{1}{2n}\right)-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= \sum_{n=2}^{2N+1} \frac{1}{n}+\sum_{n=1}^{2N} \frac{1}{n}-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= H_{2N+1}-1+H_{2N}-2H_N \\ &amp;= \ln(2N+1)+\ln(2N)-2\ln N - 1 +o(1)\\ &amp;= 2\ln 2 - 1 + \ln(N+\frac{1}{2})+ \ln(N+\frac{1}{2})-2\ln N +o(1)\\ &amp;= 2\ln 2 - 1 + \ln\frac{N+\frac{1}{2}}{N}+ \ln\frac{N-\frac{1}{2}}{N} +o(1)\\ &amp;\xrightarrow[N\to\infty]{} \boxed{2\ln 2 -1} \end{align}$$ where $H_N = \sum_{n=1}^N \frac{1}{n}$ denotes the $N$-th <a href="https://en.wikipedia.org/wiki/Harmonic_number">Harmonic number</a>, which <a 
href="https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence">satisfies</a> $$ H_N = \ln N + \gamma + o(1) $$ when $N\to\infty$ ($\gamma$ being Euler's constant). We also used that by continuity of the logarithm, $\ln\frac{N\pm\frac{1}{2}}{N} \xrightarrow[N\to\infty]{} \ln 1=0$.</p></li> </ul> Answer 14: <p>You can do this way. It is easier. Let $$ f(x)=\sum_{n=1}^\infty\frac{1}{4n^3-n}x^{2n+1}. $$ Then $f(1)=\sum_{n=1}^\infty\frac{1}{4n^3-n}$ and $$ f'(x)=\sum_{n=1}^\infty\frac{1}{(2n-1)n}x^{2n}, f''(x)=2\sum_{n=1}^\infty\frac{1}{(2n-1)}x^{2n-1}, f'''(x)=2\sum_{n=1}^\infty x^{2n-2}=\frac2{1-x^2}$$ and hence $$ f(1)=\int_0^1\int_0^s\int_0^t\frac2{1-x^2}dxdtds=2\ln2-1.$$</p> Answer 15: <p><em>Following a comment of mine above:</em></p> <ul> <li><p>without checking too carefully, the issue in your step most likely resides in the <em>"I integrate both sides of (1) from 0 to 1."</em> You then integrate the LHS termwise -- <em>why</em> can you do that? The interval of convergence of your power series is $1$, so it is <em>not</em> obvious you can swap $\sum_n$ and $\int_0^1$. 
And, as it turns out, worse than "not obvious" it appears to be wrong.</p></li> <li><p>Now, another approach to compute the sum: starting with your $(1)$, we write, for $N\geq 1$, $$\begin{align} \sum_{n=1}^N a_n &amp;= \sum_{n=1}^N \frac{1}{2n+1}+\sum_{n=1}^N \frac{1}{2n-1}-\sum_{n=1}^N \frac{1}{n}\\ &amp;= \sum_{n=1}^N \left(\frac{1}{2n+1}+\frac{1}{2n}\right)+\sum_{n=1}^N \left(\frac{1}{2n-1}+\frac{1}{2n}\right)-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= \sum_{n=2}^{2N+1} \frac{1}{n}+\sum_{n=1}^{2N} \frac{1}{n}-2\sum_{n=1}^N \frac{1}{n} \\ &amp;= H_{2N+1}-1+H_{2N}-2H_N \\ &amp;= \ln(2N+1)+\ln(2N)-2\ln N - 1 +o(1)\\ &amp;= 2\ln 2 - 1 + \ln(N+\frac{1}{2})+ \ln N-2\ln N +o(1)\\ &amp;= 2\ln 2 - 1 + \ln\frac{N+\frac{1}{2}}{N} +o(1)\\ &amp;\xrightarrow[N\to\infty]{} \boxed{2\ln 2 -1} \end{align}$$ where $H_N = \sum_{n=1}^N \frac{1}{n}$ denotes the $N$-th <a href="https://en.wikipedia.org/wiki/Harmonic_number">Harmonic number</a>, which <a href="https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Rate_of_divergence">satisfies</a> $$ H_N = \ln N + \gamma + o(1) $$ when $N\to\infty$ ($\gamma$ being Euler's constant). We also used that by continuity of the logarithm, $\ln\frac{N+\frac{1}{2}}{N} \xrightarrow[N\to\infty]{} \ln 1=0$.</p></li> </ul> Answer 16: <p>You can do this way. It is easier. Let $$ f(x)=\sum_{n=1}^\infty\frac{1}{4n^3-n}x^{2n+1}. $$ Then $f(1)=\sum_{n=1}^\infty\frac{1}{4n^3-n}$ and $$ f'(x)=\sum_{n=1}^\infty\frac{1}{(2n-1)n}x^{2n},\quad f''(x)=2\sum_{n=1}^\infty\frac{1}{2n-1}x^{2n-1},\quad f'''(x)=2\sum_{n=1}^\infty x^{2n-2}=\frac2{1-x^2}$$ and hence $$ f(1)=\int_0^1\int_0^s\int_0^t\frac2{1-x^2}\,dx\,dt\,ds=2\ln2-1.$$</p>
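Both answers above arrive at the same closed form. As a sanity check (a quick numeric sketch in plain Python, not part of either answer), the partial sums do converge to $2\ln 2 - 1 \approx 0.386294$:

```python
import math

# Partial sums of sum_{n>=1} 1/(4n^3 - n).  Since
#   1/(4n^3 - n) = 1/(2n+1) + 1/(2n-1) - 1/n   (the split used above),
# the tail beyond N is of order 1/(8N^2), so N = 100000 already gives
# about eleven correct digits.
N = 100_000
s = sum(1.0 / (4 * n**3 - n) for n in range(1, N + 1))
closed_form = 2 * math.log(2) - 1

print(f"partial sum up to N = {N}: {s:.12f}")
print(f"2*ln(2) - 1:              {closed_form:.12f}")
```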
Find a minimum of $x^2+y^2$ under the condition $x^3+3xy+y^3=1$
2,107,048
6
https://math.stackexchange.com/questions/2107048/find-a-minimum-of-x2y2-under-the-condition-x33xyy3-1
<p>As in the title, I've tried to find a maximum and minimum of $x^2+y^2$ when $x^3+3xy+y^3=1$ holds. It is not too hard to show that $x^2+y^2$ has no maximum, but I can't find a minimum. The Lagrange multiplier method gives a messy calculation that I can't push through. Is there any elegant way to find it? Thanks for any help.</p> <p>P.S. Sorry, I made a typo in the $xy$-coefficient.</p>
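Every answer below hinges on the fact that the constraint polynomial factors as $(x+y-1)(x^2+y^2-xy+x+y+1)$. That identity can be verified exactly in plain Python (integer arithmetic on a grid; since both sides have degree at most $3$ in each variable, agreement on the grid proves the polynomial identity):

```python
# Exact check of
#   x^3 + 3xy + y^3 - 1 == (x + y - 1)(x^2 + y^2 - xy + x + y + 1)
# on an integer grid.  Both sides have degree <= 3 in each variable,
# so agreeing on a 21 x 21 grid forces the two polynomials to be equal.
def lhs(x, y):
    return x**3 + 3 * x * y + y**3 - 1

def rhs(x, y):
    return (x + y - 1) * (x**2 + y**2 - x * y + x + y + 1)

ok = all(lhs(x, y) == rhs(x, y) for x in range(-10, 11) for y in range(-10, 11))
print("identity holds on the grid:", ok)  # True
```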
[ { "answer_id": 2107128, "score": 6, "text": "<p>Noting that\n$$x^3+y^3=(x+y)^3-3xy(x+y)$$\nand\n$$(x+y)^3-1=(x+y-1)\\left[(x+y)^2+(x+y)+1\\right]$$\ngives\n$$x^3+3xy+y^3=1 \\\\\\\\ \\implies\\ (x+y)^3-1-3xy(x+y)+3xy=0 \\\\\\\\ \\implies\\ (x+y-1)\\left[(x+y)^2+(x+y)+1-3xy\\right]=0 \\\\\\\\ \\implies\\ (x+y-1)(x^2+y^2-xy+x+y+1)=0$$\nSo either $x+y=1$ or $x^2+y^2-xy+x+y+1=0$.</p>\n\n<p>Treating the latter as a quadratic equation in $x$ gives its discriminant as $(y-1)^2-4(y^2+y+1)=-3(y+1)^2$; so a real solution exists only when $y=-1$, at which value $x=-1$. This gives $x^2+y^2=(-1)^2+(-1)^2=2$.</p>\n\n<p>As for $x+y=1$, we want the circle of $x^2+y^2=r$ $(r\\ge0)$ to be tangent to that straight line. This occurs at $x=y=\\dfrac12$, giving $r_{\\min}=\\left(\\dfrac12\\right)^2+\\left(\\dfrac12\\right)^2=\\dfrac12$.</p>\n\n<p>Hence the minimum value is\n$$\\min\\left(2,\\,\\frac12\\right)\\ =\\ \\frac12$$</p>\n" }, { "answer_id": 2107077, "score": 5, "text": "<p>$$x^3+3xy+y^3-1=(x+y-1)(x^2+y^2-xy+x+y+1)=0$$</p>\n\n<p>So either $y=1-x$ or $(x,y)=(-1,-1)$</p>\n\n<p>Subbing this in gives $x^2+(1-x)^2$ or $2$ to be minimized. Considering the first simple calculus gives us that $\\frac{dy}{dx}=2x-2(1-x)=4x-2$. Setting this equal to zero gives $x=y=\\frac{1}{2}$ as a possible minimum/maximum. Looking at the second derivative $\\frac{d^2y}{dx^2}=4$ shows it is a minimum. Evaluating it gives $x^2+y^2=\\frac{1}{2}$ which is less than the value at $(x,y)=(-1,-1)$ so the minimum value is $\\frac{1}{2}$ at $(x,y)=\\left(\\frac{1}{2},\\frac{1}{2}\\right)$.</p>\n" }, { "answer_id": 2591519, "score": 2, "text": "<p>Let $x=u+v$ and $y=u-v$. Then $x^2+y^2=2(u^2+v^2)$, while the equation $x^3+3xy+y^3=1$ becomes $2u^3+6uv^2+3(u^2-v^2)=1$, or</p>\n\n<p>$$2u^3+3u^2-1=-3v^2(2u-1)$$</p>\n\n<p>Factoring the cubic on the left hand side, we obtain</p>\n\n<p>$$(u+1)^2(2u-1)=-3v^2(2u-1)$$</p>\n\n<p>So either $2u-1=0$, in which case $v$ can be anything, or else $(u+1)^2=-3v^2$, which, because of the minus sign, is possible only if $u+1=v=0$. The latter possibility leads to </p>\n\n<p>$$x^2+y^2=2((-1)^2+0^2)=2$$ </p>\n\n<p>while the former leads to </p>\n\n<p>$$x^2+y^2=2\\left(\\left(1\\over2\\right)^2+v^2\\right)={1\\over2}+2v^2$$</p>\n\n<p>which is clearly minimized when $v=0$. So the minimum value of $x^2+y^2$ is $1\\over2$, and it occurs at $(x,y)=({1\\over2}+0,{1\\over2}-0)=({1\\over2},{1\\over2})$.</p>\n" } ]
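Combining the answers in the list above: the real points of the constraint are the line $x+y=1$ together with the isolated point $(-1,-1)$, so the minimum can be double-checked with a one-dimensional scan (a numeric sketch in plain Python, not part of any answer):

```python
# On the branch x + y = 1, minimize g(x) = x^2 + (1 - x)^2; then compare
# with the isolated solution (-1, -1), where x^2 + y^2 = 2.
xs = [i / 1000 - 5 for i in range(10_001)]           # x in [-5, 5], step 0.001
line_min = min(x * x + (1 - x) ** 2 for x in xs)     # attained at x = 1/2
isolated = (-1) ** 2 + (-1) ** 2                     # value at (-1, -1)

print(min(line_min, isolated))  # 0.5, attained at (x, y) = (1/2, 1/2)
```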
Question: Find a minimum of $x^2+y^2$ under the condition $x^3+3xy+y^3=1$ Answer 1: <p>Noting that $$x^3+y^3=(x+y)^3-3xy(x+y)$$ and $$(x+y)^3-1=(x+y-1)\left[(x+y)^2+(x+y)+1\right]$$ gives $$x^3+3xy+y^3=1 \\\\ \implies\ (x+y)^3-1-3xy(x+y)+3xy=0 \\\\ \implies\ (x+y-1)\left[(x+y)^2+(x+y)+1-3xy\right]=0 \\\\ \implies\ (x+y-1)(x^2+y^2-xy+x+y+1)=0$$ So either $x+y=1$ or $x^2+y^2-xy+x+y+1=0$.</p> <p>Treating the latter as a quadratic equation in $x$ gives its discriminant as $(y-1)^2-4(y^2+y+1)=-3(y+1)^2$; so a real solution exists only when $y=-1$, at which value $x=-1$. This gives $x^2+y^2=(-1)^2+(-1)^2=2$.</p> <p>As for $x+y=1$, we want the circle of $x^2+y^2=r$ $(r\ge0)$ to be tangent to that straight line. This occurs at $x=y=\dfrac12$, giving $r_{\min}=\left(\dfrac12\right)^2+\left(\dfrac12\right)^2=\dfrac12$.</p> <p>Hence the minimum value is $$\min\left(2,\,\frac12\right)\ =\ \frac12$$</p> Answer 2: <p>$$x^3+3xy+y^3-1=(x+y-1)(x^2+y^2-xy+x+y+1)=0$$</p> <p>So either $y=1-x$ or $(x,y)=(-1,-1)$</p> <p>Subbing this in gives $x^2+(1-x)^2$ or $2$ to be minimized. Considering the first simple calculus gives us that $\frac{dy}{dx}=2x-2(1-x)=4x-2$. Setting this equal to zero gives $x=y=\frac{1}{2}$ as a possible minimum/maximum. Looking at the second derivative $\frac{d^2y}{dx^2}=4$ shows it is a minimum. Evaluating it gives $x^2+y^2=\frac{1}{2}$ which is less than the value at $(x,y)=(-1,-1)$ so the minimum value is $\frac{1}{2}$ at $(x,y)=\left(\frac{1}{2},\frac{1}{2}\right)$.</p> Answer 3: <p>Let $x=u+v$ and $y=u-v$. Then $x^2+y^2=2(u^2+v^2)$, while the equation $x^3+3xy+y^3=1$ becomes $2u^3+6uv^2+3(u^2-v^2)=1$, or</p> <p>$$2u^3+3u^2-1=-3v^2(2u-1)$$</p> <p>Factoring the cubic on the left hand side, we obtain</p> <p>$$(u+1)^2(2u-1)=-3v^2(2u-1)$$</p> <p>So either $2u-1=0$, in which case $v$ can be anything, or else $(u+1)^2=-3v^2$, which, because of the minus sign, is possible only if $u+1=v=0$. 
The latter possibility leads to </p> <p>$$x^2+y^2=2((-1)^2+0^2)=2$$ </p> <p>while the former leads to </p> <p>$$x^2+y^2=2\left(\left(1\over2\right)^2+v^2\right)={1\over2}+2v^2$$</p> <p>which is clearly minimized when $v=0$. So the minimum value of $x^2+y^2$ is $1\over2$, and it occurs at $(x,y)=({1\over2}+0,{1\over2}-0)=({1\over2},{1\over2})$.</p> Answer 13: <p>Noting that $$x^3+y^3=(x+y)^3-3xy(x+y)$$ and $$(x+y)^3-1=(x+y-1)\left[(x+y)^2+(x+y)+1\right]$$ gives $$x^3+3xy+y^3=1 \\\\ \implies\ (x+y)^3-1-3xy(x+y)+3xy=0 \\\\ \implies\ (x+y-1)\left[(x+y)^2+(x+y)+1-3xy\right]=0 \\\\ \implies\ (x+y-1)(x^2+y^2-xy+x+y+1)=0$$ So either $x+y=1$ or $x^2+y^2-xy+x+y+1=0$.</p> <p>Treating the latter as a quadratic equation in $x$ gives its discriminant as $(y-1)^2-4(y^2+y+1)=-3(y+1)^2$; so a real solution exists only when $y=-1$, at which value $x=-1$. This gives $x^2+y^2=(-1)^2+(-1)^2=2$.</p> <p>As for $x+y=1$, we want the circle of $x^2+y^2=r$ $(r\ge0)$ to be tangent to that straight line.
This occurs at $x=y=\dfrac12$, giving $r_{\min}=\left(\dfrac12\right)^2+\left(\dfrac12\right)^2=\dfrac12$.</p> <p>Hence the minimum value is $$\min\left(2,\,\frac12\right)\ =\ \frac12$$</p> Answer 14: <p>$$x^3+3xy+y^3-1=(x+y-1)(x^2+y^2-xy+x+y+1)=0$$</p> <p>So either $y=1-x$ or $(x,y)=(-1,-1)$</p> <p>Subbing this in gives $x^2+(1-x)^2$ or $2$ to be minimized. Considering the first simple calculus gives us that $\frac{dy}{dx}=2x-2(1-x)=4x-2$. Setting this equal to zero gives $x=y=\frac{1}{2}$ as a possible minimum/maximum. Looking at the second derivative $\frac{d^2y}{dx^2}=4$ shows it is a minimum. Evaluating it gives $x^2+y^2=\frac{1}{2}$ which is less than the value at $(x,y)=(-1,-1)$ so the minimum value is $\frac{1}{2}$ at $(x,y)=\left(\frac{1}{2},\frac{1}{2}\right)$.</p> Answer 15: <p>Let $x=u+v$ and $y=u-v$. Then $x^2+y^2=2(u^2+v^2)$, while the equation $x^3+3xy+y^3=1$ becomes $2u^3+6uv^2+3(u^2-v^2)=1$, or</p> <p>$$2u^3+3u^2-1=-3v^2(2u-1)$$</p> <p>Factoring the cubic on the left hand side, we obtain</p> <p>$$(u+1)^2(2u-1)=-3v^2(2u-1)$$</p> <p>So either $2u-1=0$, in which case $v$ can be anything, or else $(u+1)^2=-3v^2$, which, because of the minus sign, is possible only if $u+1=v=0$. The latter possibility leads to </p> <p>$$x^2+y^2=2((-1)^2+0^2)=2$$ </p> <p>while the former leads to </p> <p>$$x^2+y^2=2\left(\left(1\over2\right)^2+v^2\right)={1\over2}+2v^2$$</p> <p>which is clearly minimized when $v=0$. 
So the minimum value of $x^2+y^2$ is $1\over2$, and it occurs at $(x,y)=({1\over2}+0,{1\over2}-0)=({1\over2},{1\over2})$.</p> Answer 16: <p>Noting that $$x^3+y^3=(x+y)^3-3xy(x+y)$$ and $$(x+y)^3-1=(x+y-1)\left[(x+y)^2+(x+y)+1\right]$$ gives $$x^3+3xy+y^3=1 \\\\ \implies\ (x+y)^3-1-3xy(x+y)+3xy=0 \\\\ \implies\ (x+y-1)\left[(x+y)^2+(x+y)+1-3xy\right]=0 \\\\ \implies\ (x+y-1)(x^2+y^2-xy+x+y+1)=0$$ So either $x+y=1$ or $x^2+y^2-xy+x+y+1=0$.</p> <p>Treating the latter as a quadratic equation in $x$ gives its discriminant as $(y-1)^2-4(y^2+y+1)=-3(y+1)^2$; so a real solution exists only when $y=-1$, at which value $x=-1$. This gives $x^2+y^2=(-1)^2+(-1)^2=2$.</p> <p>As for $x+y=1$, we want the circle of $x^2+y^2=r$ $(r\ge0)$ to be tangent to that straight line. This occurs at $x=y=\dfrac12$, giving $r_{\min}=\left(\dfrac12\right)^2+\left(\dfrac12\right)^2=\dfrac12$.</p> <p>Hence the minimum value is $$\min\left(2,\,\frac12\right)\ =\ \frac12$$</p> Answer 17: <p>$$x^3+3xy+y^3-1=(x+y-1)(x^2+y^2-xy+x+y+1)=0$$</p> <p>So either $y=1-x$ or $(x,y)=(-1,-1)$</p> <p>Subbing this in gives $x^2+(1-x)^2$ or $2$ to be minimized. Considering the first simple calculus gives us that $\frac{dy}{dx}=2x-2(1-x)=4x-2$. Setting this equal to zero gives $x=y=\frac{1}{2}$ as a possible minimum/maximum. Looking at the second derivative $\frac{d^2y}{dx^2}=4$ shows it is a minimum. Evaluating it gives $x^2+y^2=\frac{1}{2}$ which is less than the value at $(x,y)=(-1,-1)$ so the minimum value is $\frac{1}{2}$ at $(x,y)=\left(\frac{1}{2},\frac{1}{2}\right)$.</p> Answer 18: <p>Let $x=u+v$ and $y=u-v$. Then $x^2+y^2=2(u^2+v^2)$, while the equation $x^3+3xy+y^3=1$ becomes $2u^3+6uv^2+3(u^2-v^2)=1$, or</p> <p>$$2u^3+3u^2-1=-3v^2(2u-1)$$</p> <p>Factoring the cubic on the left hand side, we obtain</p> <p>$$(u+1)^2(2u-1)=-3v^2(2u-1)$$</p> <p>So either $2u-1=0$, in which case $v$ can be anything, or else $(u+1)^2=-3v^2$, which, because of the minus sign, is possible only if $u+1=v=0$. 
The latter possibility leads to </p> <p>$$x^2+y^2=2((-1)^2+0^2)=2$$ </p> <p>while the former leads to </p> <p>$$x^2+y^2=2\left(\left(1\over2\right)^2+v^2\right)={1\over2}+2v^2$$</p> <p>which is clearly minimized when $v=0$. So the minimum value of $x^2+y^2$ is $1\over2$, and it occurs at $(x,y)=({1\over2}+0,{1\over2}-0)=({1\over2},{1\over2})$.</p> Answer 19: <p>Noting that $$x^3+y^3=(x+y)^3-3xy(x+y)$$ and $$(x+y)^3-1=(x+y-1)\left[(x+y)^2+(x+y)+1\right]$$ gives $$x^3+3xy+y^3=1 \\\\ \implies\ (x+y)^3-1-3xy(x+y)+3xy=0 \\\\ \implies\ (x+y-1)\left[(x+y)^2+(x+y)+1-3xy\right]=0 \\\\ \implies\ (x+y-1)(x^2+y^2-xy+x+y+1)=0$$ So either $x+y=1$ or $x^2+y^2-xy+x+y+1=0$.</p> <p>Treating the latter as a quadratic equation in $x$ gives its discriminant as $(y-1)^2-4(y^2+y+1)=-3(y+1)^2$; so a real solution exists only when $y=-1$, at which value $x=-1$. This gives $x^2+y^2=(-1)^2+(-1)^2=2$.</p> <p>As for $x+y=1$, we want the circle of $x^2+y^2=r$ $(r\ge0)$ to be tangent to that straight line. This occurs at $x=y=\dfrac12$, giving $r_{\min}=\left(\dfrac12\right)^2+\left(\dfrac12\right)^2=\dfrac12$.</p> <p>Hence the minimum value is $$\min\left(2,\,\frac12\right)\ =\ \frac12$$</p> Answer 20: <p>$$x^3+3xy+y^3-1=(x+y-1)(x^2+y^2-xy+x+y+1)=0$$</p> <p>So either $y=1-x$ or $(x,y)=(-1,-1)$</p> <p>Subbing this in gives $x^2+(1-x)^2$ or $2$ to be minimized. Considering the first simple calculus gives us that $\frac{dy}{dx}=2x-2(1-x)=4x-2$. Setting this equal to zero gives $x=y=\frac{1}{2}$ as a possible minimum/maximum. Looking at the second derivative $\frac{d^2y}{dx^2}=4$ shows it is a minimum. Evaluating it gives $x^2+y^2=\frac{1}{2}$ which is less than the value at $(x,y)=(-1,-1)$ so the minimum value is $\frac{1}{2}$ at $(x,y)=\left(\frac{1}{2},\frac{1}{2}\right)$.</p> Answer 21: <p>Let $x=u+v$ and $y=u-v$. 
Then $x^2+y^2=2(u^2+v^2)$, while the equation $x^3+3xy+y^3=1$ becomes $2u^3+6uv^2+3(u^2-v^2)=1$, or</p> <p>$$2u^3+3u^2-1=-3v^2(2u-1)$$</p> <p>Factoring the cubic on the left hand side, we obtain</p> <p>$$(u+1)^2(2u-1)=-3v^2(2u-1)$$</p> <p>So either $2u-1=0$, in which case $v$ can be anything, or else $(u+1)^2=-3v^2$, which, because of the minus sign, is possible only if $u+1=v=0$. The latter possibility leads to </p> <p>$$x^2+y^2=2((-1)^2+0^2)=2$$ </p> <p>while the former leads to </p> <p>$$x^2+y^2=2\left(\left(1\over2\right)^2+v^2\right)={1\over2}+2v^2$$</p> <p>which is clearly minimized when $v=0$. So the minimum value of $x^2+y^2$ is $1\over2$, and it occurs at $(x,y)=({1\over2}+0,{1\over2}-0)=({1\over2},{1\over2})$.</p> Answer 22: <p>Noting that $$x^3+y^3=(x+y)^3-3xy(x+y)$$ and $$(x+y)^3-1=(x+y-1)\left[(x+y)^2+(x+y)+1\right]$$ gives $$x^3+3xy+y^3=1 \\\\ \implies\ (x+y)^3-1-3xy(x+y)+3xy=0 \\\\ \implies\ (x+y-1)\left[(x+y)^2+(x+y)+1-3xy\right]=0 \\\\ \implies\ (x+y-1)(x^2+y^2-xy+x+y+1)=0$$ So either $x+y=1$ or $x^2+y^2-xy+x+y+1=0$.</p> <p>Treating the latter as a quadratic equation in $x$ gives its discriminant as $(y-1)^2-4(y^2+y+1)=-3(y+1)^2$; so a real solution exists only when $y=-1$, at which value $x=-1$. This gives $x^2+y^2=(-1)^2+(-1)^2=2$.</p> <p>As for $x+y=1$, we want the circle of $x^2+y^2=r$ $(r\ge0)$ to be tangent to that straight line. This occurs at $x=y=\dfrac12$, giving $r_{\min}=\left(\dfrac12\right)^2+\left(\dfrac12\right)^2=\dfrac12$.</p> <p>Hence the minimum value is $$\min\left(2,\,\frac12\right)\ =\ \frac12$$</p> Answer 23: <p>$$x^3+3xy+y^3-1=(x+y-1)(x^2+y^2-xy+x+y+1)=0$$</p> <p>So either $y=1-x$ or $(x,y)=(-1,-1)$</p> <p>Subbing this in gives $x^2+(1-x)^2$ or $2$ to be minimized. Considering the first simple calculus gives us that $\frac{dy}{dx}=2x-2(1-x)=4x-2$. Setting this equal to zero gives $x=y=\frac{1}{2}$ as a possible minimum/maximum. Looking at the second derivative $\frac{d^2y}{dx^2}=4$ shows it is a minimum. 
Evaluating it gives $x^2+y^2=\frac{1}{2}$ which is less than the value at $(x,y)=(-1,-1)$ so the minimum value is $\frac{1}{2}$ at $(x,y)=\left(\frac{1}{2},\frac{1}{2}\right)$.</p> Answer 24: <p>Let $x=u+v$ and $y=u-v$. Then $x^2+y^2=2(u^2+v^2)$, while the equation $x^3+3xy+y^3=1$ becomes $2u^3+6uv^2+3(u^2-v^2)=1$, or</p> <p>$$2u^3+3u^2-1=-3v^2(2u-1)$$</p> <p>Factoring the cubic on the left hand side, we obtain</p> <p>$$(u+1)^2(2u-1)=-3v^2(2u-1)$$</p> <p>So either $2u-1=0$, in which case $v$ can be anything, or else $(u+1)^2=-3v^2$, which, because of the minus sign, is possible only if $u+1=v=0$. The latter possibility leads to </p> <p>$$x^2+y^2=2((-1)^2+0^2)=2$$ </p> <p>while the former leads to </p> <p>$$x^2+y^2=2\left(\left(1\over2\right)^2+v^2\right)={1\over2}+2v^2$$</p> <p>which is clearly minimized when $v=0$. So the minimum value of $x^2+y^2$ is $1\over2$, and it occurs at $(x,y)=({1\over2}+0,{1\over2}-0)=({1\over2},{1\over2})$.</p> Answer 25: <p>Noting that $$x^3+y^3=(x+y)^3-3xy(x+y)$$ and $$(x+y)^3-1=(x+y-1)\left[(x+y)^2+(x+y)+1\right]$$ gives $$x^3+3xy+y^3=1 \\\\ \implies\ (x+y)^3-1-3xy(x+y)+3xy=0 \\\\ \implies\ (x+y-1)\left[(x+y)^2+(x+y)+1-3xy\right]=0 \\\\ \implies\ (x+y-1)(x^2+y^2-xy+x+y+1)=0$$ So either $x+y=1$ or $x^2+y^2-xy+x+y+1=0$.</p> <p>Treating the latter as a quadratic equation in $x$ gives its discriminant as $(y-1)^2-4(y^2+y+1)=-3(y+1)^2$; so a real solution exists only when $y=-1$, at which value $x=-1$. This gives $x^2+y^2=(-1)^2+(-1)^2=2$.</p> <p>As for $x+y=1$, we want the circle of $x^2+y^2=r$ $(r\ge0)$ to be tangent to that straight line. This occurs at $x=y=\dfrac12$, giving $r_{\min}=\left(\dfrac12\right)^2+\left(\dfrac12\right)^2=\dfrac12$.</p> <p>Hence the minimum value is $$\min\left(2,\,\frac12\right)\ =\ \frac12$$</p> Answer 26: <p>$$x^3+3xy+y^3-1=(x+y-1)(x^2+y^2-xy+x+y+1)=0$$</p> <p>So either $y=1-x$ or $(x,y)=(-1,-1)$</p> <p>Subbing this in gives $x^2+(1-x)^2$ or $2$ to be minimized. 
Considering the first simple calculus gives us that $\frac{dy}{dx}=2x-2(1-x)=4x-2$. Setting this equal to zero gives $x=y=\frac{1}{2}$ as a possible minimum/maximum. Looking at the second derivative $\frac{d^2y}{dx^2}=4$ shows it is a minimum. Evaluating it gives $x^2+y^2=\frac{1}{2}$ which is less than the value at $(x,y)=(-1,-1)$ so the minimum value is $\frac{1}{2}$ at $(x,y)=\left(\frac{1}{2},\frac{1}{2}\right)$.</p> Answer 27: <p>Let $x=u+v$ and $y=u-v$. Then $x^2+y^2=2(u^2+v^2)$, while the equation $x^3+3xy+y^3=1$ becomes $2u^3+6uv^2+3(u^2-v^2)=1$, or</p> <p>$$2u^3+3u^2-1=-3v^2(2u-1)$$</p> <p>Factoring the cubic on the left hand side, we obtain</p> <p>$$(u+1)^2(2u-1)=-3v^2(2u-1)$$</p> <p>So either $2u-1=0$, in which case $v$ can be anything, or else $(u+1)^2=-3v^2$, which, because of the minus sign, is possible only if $u+1=v=0$. The latter possibility leads to </p> <p>$$x^2+y^2=2((-1)^2+0^2)=2$$ </p> <p>while the former leads to </p> <p>$$x^2+y^2=2\left(\left(1\over2\right)^2+v^2\right)={1\over2}+2v^2$$</p> <p>which is clearly minimized when $v=0$. So the minimum value of $x^2+y^2$ is $1\over2$, and it occurs at $(x,y)=({1\over2}+0,{1\over2}-0)=({1\over2},{1\over2})$.</p> Answer 28: <p>Noting that $$x^3+y^3=(x+y)^3-3xy(x+y)$$ and $$(x+y)^3-1=(x+y-1)\left[(x+y)^2+(x+y)+1\right]$$ gives $$x^3+3xy+y^3=1 \\\\ \implies\ (x+y)^3-1-3xy(x+y)+3xy=0 \\\\ \implies\ (x+y-1)\left[(x+y)^2+(x+y)+1-3xy\right]=0 \\\\ \implies\ (x+y-1)(x^2+y^2-xy+x+y+1)=0$$ So either $x+y=1$ or $x^2+y^2-xy+x+y+1=0$.</p> <p>Treating the latter as a quadratic equation in $x$ gives its discriminant as $(y-1)^2-4(y^2+y+1)=-3(y+1)^2$; so a real solution exists only when $y=-1$, at which value $x=-1$. This gives $x^2+y^2=(-1)^2+(-1)^2=2$.</p> <p>As for $x+y=1$, we want the circle of $x^2+y^2=r$ $(r\ge0)$ to be tangent to that straight line. 
This occurs at $x=y=\dfrac12$, giving $r_{\min}=\left(\dfrac12\right)^2+\left(\dfrac12\right)^2=\dfrac12$.</p> <p>Hence the minimum value is $$\min\left(2,\,\frac12\right)\ =\ \frac12$$</p> Answer 29: <p>$$x^3+3xy+y^3-1=(x+y-1)(x^2+y^2-xy+x+y+1)=0$$</p> <p>So either $y=1-x$ or $(x,y)=(-1,-1)$</p> <p>Subbing this in gives $x^2+(1-x)^2$ or $2$ to be minimized. Considering the first simple calculus gives us that $\frac{dy}{dx}=2x-2(1-x)=4x-2$. Setting this equal to zero gives $x=y=\frac{1}{2}$ as a possible minimum/maximum. Looking at the second derivative $\frac{d^2y}{dx^2}=4$ shows it is a minimum. Evaluating it gives $x^2+y^2=\frac{1}{2}$ which is less than the value at $(x,y)=(-1,-1)$ so the minimum value is $\frac{1}{2}$ at $(x,y)=\left(\frac{1}{2},\frac{1}{2}\right)$.</p> Answer 30: <p>Let $x=u+v$ and $y=u-v$. Then $x^2+y^2=2(u^2+v^2)$, while the equation $x^3+3xy+y^3=1$ becomes $2u^3+6uv^2+3(u^2-v^2)=1$, or</p> <p>$$2u^3+3u^2-1=-3v^2(2u-1)$$</p> <p>Factoring the cubic on the left hand side, we obtain</p> <p>$$(u+1)^2(2u-1)=-3v^2(2u-1)$$</p> <p>So either $2u-1=0$, in which case $v$ can be anything, or else $(u+1)^2=-3v^2$, which, because of the minus sign, is possible only if $u+1=v=0$. The latter possibility leads to </p> <p>$$x^2+y^2=2((-1)^2+0^2)=2$$ </p> <p>while the former leads to </p> <p>$$x^2+y^2=2\left(\left(1\over2\right)^2+v^2\right)={1\over2}+2v^2$$</p> <p>which is clearly minimized when $v=0$. So the minimum value of $x^2+y^2$ is $1\over2$, and it occurs at $(x,y)=({1\over2}+0,{1\over2}-0)=({1\over2},{1\over2})$.</p>
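Both lines of attack above rest on the same two facts: the identity $x^3+3xy+y^3-1=(x+y-1)(x^2+y^2-xy+x+y+1)$, and the minimum of $x^2+y^2$ along the branch $x+y=1$. A quick numerical sanity check (a sketch only; the sample count, grid step, and tolerances are arbitrary choices):

```python
import random

def constraint(x, y):
    # left-hand side minus right-hand side of x^3 + 3xy + y^3 = 1
    return x**3 + 3*x*y + y**3 - 1

def factored(x, y):
    # claimed factorization (x + y - 1)(x^2 + y^2 - xy + x + y + 1)
    return (x + y - 1) * (x**2 + y**2 - x*y + x + y + 1)

# the two polynomials agree at random points, so they agree identically
for _ in range(200):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert abs(constraint(x, y) - factored(x, y)) < 1e-6

# on the branch x + y = 1, minimize x^2 + (1 - x)^2 by grid search
best = min(x * x + (1 - x) ** 2 for x in (i / 1000 for i in range(-2000, 3001)))
assert abs(best - 0.5) < 1e-3   # minimum 1/2, attained at x = y = 1/2
```

The isolated point $(-1,-1)$ coming from the second factor gives $x^2+y^2=2$, so the global minimum is indeed $\frac12$.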
How would one describe the set of elements of $\mathbb{Z}(\sqrt{-d})$ whose norms are divisible by a rational prime $p$?
2,107,189
6
https://math.stackexchange.com/questions/2107189/how-would-one-describe-set-of-elements-of-mathbbz-sqrt-d-whose-norm-are
<p>If we consider non-UFDs with $d &gt; 1$, how would one go about describing the integers in $\mathbb{Z}(\sqrt{-d})$ whose norm is divisible by a prime $p$, but under the stipulation that it must be done using principal ideals? Can it be done? I'll detail what I mean;</p> <p>An example for a UFD would be the following;</p> <p>Let $f(\alpha)=1$ for $\alpha \in \mathbb{Z}(\sqrt{-1})$, and 0 otherwise. I want a new function, $g(\alpha)$, such that $g(\alpha)=1$ if $p \mid N\alpha$, otherwise $g(\alpha)=0$. For an inert prime $p$, $g(\alpha)=f(\frac{\alpha}{p})$. If $p$ splits, then $g(\alpha)=f(\frac{\alpha}{\mathfrak{p}})+f(\frac{\alpha}{\bar{\mathfrak{p}}})-f(\frac{\alpha}{N(\mathfrak{p})})$ where $N(\mathfrak{p})=p$. In other words, how can we construct $g$ using only $f$ for imaginary quadratic fields without unique factorization? An example would be much appreciated.</p>
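For a concrete feel for the obstruction in the non-UFD case, here is a small Python sketch for $d = 5$, $p = 3$. The helper names and the congruence criteria are my own (derived by expanding $3\alpha + (1 \pm \sqrt{-5})\beta$ into real and $\sqrt{-5}$ components, so they are assumptions to be checked, not quoted from any answer): the elements whose norm is divisible by $3$ turn out to be exactly the union of the two non-principal ideals $\langle 3, 1 + \sqrt{-5} \rangle$ and $\langle 3, 1 - \sqrt{-5} \rangle$.

```python
def norm(a, b):
    # norm of a + b*sqrt(-5) in Z[sqrt(-5)]:  N = a^2 + 5 b^2
    return a * a + 5 * b * b

def in_P(a, b):
    # membership in the (non-principal) ideal <3, 1 + sqrt(-5)>:
    # a + b*sqrt(-5) = 3*alpha + (1 + sqrt(-5))*beta is solvable
    # exactly when a = b (mod 3) -- a short component computation
    return (a - b) % 3 == 0

def in_P_bar(a, b):
    # conjugate ideal <3, 1 - sqrt(-5)>: a = -b (mod 3)
    return (a + b) % 3 == 0

# 3 | N(alpha)  <=>  alpha lies in P or in P-bar
for a in range(-10, 11):
    for b in range(-10, 11):
        assert (norm(a, b) % 3 == 0) == (in_P(a, b) or in_P_bar(a, b))
```

Since neither generator set collapses to a single generator, no function built only from division by actual ring elements (as in the UFD definition of $g$ from $f$) can detect this membership, which is the heart of the question.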
[ { "answer_id": 2124662, "score": 4, "text": "<p>The functions $f$ and $g$ can be defined, but they have to operate on ideals rather than numbers, and those ideals may or may not be principal.</p>\n\n<p>First, though, let's tidy up the notation. $f_R(\\alpha) = 1$ (or True) if $\\alpha$ is a number in the ring $R$, and $0$ (or False) otherwise. And $g_R^p(\\alpha) = 1$ if $\\alpha$ is a number in the prime ideal generated by $p$, or by a splitting factor of $p$, or co-generated by $p$.</p>\n\n<p>Now let's examine $\\alpha = 82$, $p = 41$ and $R = \\textbf{Z}[i]$. Since $41$ splits as $(4 - 5i)(4 + 5i)$, we have $$g_R^p(\\alpha) = f_{\\textbf{Z}[i]}\\left(\\frac{82}{4 - 5i}\\right) + f_{\\textbf{Z}[i]}\\left(\\frac{82}{4 + 5i}\\right) - f_{\\textbf{Z}[i]}\\left(\\frac{82}{41}\\right) = 1 + 1 - 1 = 1.$$ That's correct, right?</p>\n\n<p>Now try $\\alpha = 86$, $p = 43$. As $43$ is inert, we have $g_R^p(\\alpha) = 1$, as expected.</p>\n\n<p>Let's move on to $g_{\\textbf{Z}[\\sqrt{-5}]}^{41}(82)$. Since $41$ splits as $(6 - \\sqrt{-5})(6 + \\sqrt{-5})$, the result should be much the same as $g_{\\textbf{Z}[i]}^{41}(82)$.</p>\n\n<p>And now we get to something really interesting: $g_{\\textbf{Z}[\\sqrt{-5}]}^{43}(76 + 18 \\sqrt{-5})$. $43$ is inert in this ring, and yet $N(76 + 18 \\sqrt{-5}) = 7396$, a multiple of $43$. 
Worse, $$\\frac{76 + 18 \\sqrt{-5}}{43} = \\frac{2}{43}(38 + 9 \\sqrt{-5}),$$ so $$f_{\\textbf{Z}[\\sqrt{-5}]}\\left(\\frac{76 + 18 \\sqrt{-5}}{43}\\right) = 0.$$</p>\n\n<p>If instead we defined $g_R^p(\\alpha)$ to look for the containment of $\\langle \\alpha \\rangle$ in $\\langle p \\rangle$ or $\\langle p, x \\rangle$ or $\\langle p, \\overline x \\rangle$, we might get the result we want, since $\\langle 76 + 18 \\sqrt{-5} \\rangle$ is in either $\\langle 43, 9 + \\sqrt{-5} \\rangle$ or $\\langle 43, 9 - \\sqrt{-5} \\rangle$, one of those.</p>\n" }, { "answer_id": 2112063, "score": 3, "text": "<p>I don't think what you're asking for is possible, at least not for some rational primes. Let me build on the example Mr. Brooks gave in a comment yesterday.</p>\n\n<p>The classic non-UFD is $\\mathbb Z[\\sqrt{-5}]$. Consider the rational prime $p = 3$. The norm function in the field $\\mathbb Q(\\sqrt{-5})$ is $N(a + b\\sqrt{-5}) = a^2 + 5b^2$. Possible integer norms are 0, 1, 4, 5, 6, 9, ...</p>\n\n<p>Since 3 is an impossible norm, that means 3 is irreducible. But 6, the first nontrivial positive multiple of 3, is a possible norm, and in fact $(1 - \\sqrt{-5})(1 + \\sqrt{-5}) = 6$.</p>\n\n<p>So if the union of $\\mathfrak P_1$ and $\\mathfrak P_2$ contains all numbers in this domain with norm divisible by 3, it looks something like $\\langle 3, 1 - \\sqrt{-5} \\rangle \\cup \\langle 3, 1 + \\sqrt{-5} \\rangle$.</p>\n\n<p>But neither of those two ideals are principal. Or at least they don't look principal. If either of them was principal, we'd be able to solve $x \\pm x\\sqrt{-5} = 3$ or $3x = 1 \\pm \\sqrt{-5}$ with $x \\in \\mathbb Z[\\sqrt{-5}]$.</p>\n\n<p>What would be the norm of this number $x$? Remember that the norm is a multiplicative function. 
To solve the former equation, we would have to have $$N(x) = \\frac{3}{2},$$ which is not an integer at all, and to solve the latter equation we'd need $N(x) = 2$, an impossible norm in this domain.</p>\n\n<p>Now let's look at $2 + \\sqrt{-5}$. Since this number has a norm of 9, it must be contained in $\\mathfrak P_1 \\cup \\mathfrak P_2$. I emphasize that $(2 + \\sqrt{-5}) \\not\\in \\langle 3 \\rangle$ nor $\\langle 1 - \\sqrt{-5} \\rangle$ nor $\\langle 1 + \\sqrt{-5} \\rangle$.</p>\n\n<p>If $\\mathfrak P_1 = \\langle 3, 1 - \\sqrt{-5} \\rangle$, it consists of all numbers in this domain of the form $3\\alpha + (1 - \\sqrt{-5})\\beta$, with $\\alpha$ and $\\beta$ also numbers in this domain. And since $3 + (-1 + \\sqrt{-5}) = 2 + \\sqrt{-5}$, that means $\\alpha = 1$ and $\\beta = -1$ is our verification that $(2 + \\sqrt{-5}) \\in \\mathfrak P_1$.</p>\n\n<p>EDIT: Thanks to Dave R. for pointing out a mistake of arithmetic. I've corrected it now.</p>\n" }, { "answer_id": 2124662, "score": 4, "text": "<p>The functions $f$ and $g$ can be defined, but they have to operate on ideals rather than numbers, and those ideals may or may not be principal.</p>\n\n<p>First, though, let's tidy up the notation. $f_R(\\alpha) = 1$ (or True) if $\\alpha$ is a number in the ring $R$, and $0$ (or False) otherwise. And $g_R^p(\\alpha) = 1$ if $\\alpha$ is a number in the prime ideal generated by $p$, or by a splitting factor of $p$, or co-generated by $p$.</p>\n\n<p>Now let's examine $\\alpha = 82$, $p = 41$ and $R = \\textbf{Z}[i]$. Since $41$ splits as $(4 - 5i)(4 + 5i)$, we have $$g_R^p(\\alpha) = f_{\\textbf{Z}[i]}\\left(\\frac{82}{4 - 5i}\\right) + f_{\\textbf{Z}[i]}\\left(\\frac{82}{4 + 5i}\\right) - f_{\\textbf{Z}[i]}\\left(\\frac{82}{41}\\right) = 1 + 1 - 1 = 1.$$ That's correct, right?</p>\n\n<p>Now try $\\alpha = 86$, $p = 43$. As $43$ is inert, we have $g_R^p(\\alpha) = 1$, as expected.</p>\n\n<p>Let's move on to $g_{\\textbf{Z}[\\sqrt{-5}]}^{41}(82)$. 
Since $41$ splits as $(6 - \\sqrt{-5})(6 + \\sqrt{-5})$, the result should be much the same as $g_{\\textbf{Z}[i]}^{41}(82)$.</p>\n\n<p>And now we get to something really interesting: $g_{\\textbf{Z}[\\sqrt{-5}]}^{43}(76 + 18 \\sqrt{-5})$. $43$ is inert in this ring, and yet $N(76 + 18 \\sqrt{-5}) = 7396$, a multiple of $43$. Worse, $$\\frac{76 + 18 \\sqrt{-5}}{43} = \\frac{2}{43}(38 + 9 \\sqrt{-5}),$$ so $$f_{\\textbf{Z}[\\sqrt{-5}]}\\left(\\frac{76 + 18 \\sqrt{-5}}{43}\\right) = 0.$$</p>\n\n<p>If instead we defined $g_R^p(\\alpha)$ to look for the containment of $\\langle \\alpha \\rangle$ in $\\langle p \\rangle$ or $\\langle p, x \\rangle$ or $\\langle p, \\overline x \\rangle$, we might get the result we want, since $\\langle 76 + 18 \\sqrt{-5} \\rangle$ is in either $\\langle 43, 9 + \\sqrt{-5} \\rangle$ or $\\langle 43, 9 - \\sqrt{-5} \\rangle$, one of those.</p>\n" }, { "answer_id": 2112063, "score": 3, "text": "<p>I don't think what you're asking for is possible, at least not for some rational primes. Let me build on the example Mr. Brooks gave in a comment yesterday.</p>\n\n<p>The classic non-UFD is $\\mathbb Z[\\sqrt{-5}]$. Consider the rational prime $p = 3$. The norm function in the field $\\mathbb Q(\\sqrt{-5})$ is $N(a + b\\sqrt{-5}) = a^2 + 5b^2$. Possible integer norms are 0, 1, 4, 5, 6, 9, ...</p>\n\n<p>Since 3 is an impossible norm, that means 3 is irreducible. But 6, the first nontrivial positive multiple of 3, is a possible norm, and in fact $(1 - \\sqrt{-5})(1 + \\sqrt{-5}) = 6$.</p>\n\n<p>So if the union of $\\mathfrak P_1$ and $\\mathfrak P_2$ contains all numbers in this domain with norm divisible by 3, it looks something like $\\langle 3, 1 - \\sqrt{-5} \\rangle \\cup \\langle 3, 1 + \\sqrt{-5} \\rangle$.</p>\n\n<p>But neither of those two ideals are principal. Or at least they don't look principal. 
If either of them was principal, we'd be able to solve $x \\pm x\\sqrt{-5} = 3$ or $3x = 1 \\pm \\sqrt{-5}$ with $x \\in \\mathbb Z[\\sqrt{-5}]$.</p>\n\n<p>What would be the norm of this number $x$? Remember that the norm is a multiplicative function. To solve the former equation, we would have to have $$N(x) = \\frac{3}{2},$$ which is not an integer at all, and to solve the latter equation we'd need $N(x) = 2$, an impossible norm in this domain.</p>\n\n<p>Now let's look at $2 + \\sqrt{-5}$. Since this number has a norm of 9, it must be contained in $\\mathfrak P_1 \\cup \\mathfrak P_2$. I emphasize that $(2 + \\sqrt{-5}) \\not\\in \\langle 3 \\rangle$ nor $\\langle 1 - \\sqrt{-5} \\rangle$ nor $\\langle 1 + \\sqrt{-5} \\rangle$.</p>\n\n<p>If $\\mathfrak P_1 = \\langle 3, 1 - \\sqrt{-5} \\rangle$, it consists of all numbers in this domain of the form $3\\alpha + (1 - \\sqrt{-5})\\beta$, with $\\alpha$ and $\\beta$ also numbers in this domain. And since $3 + (-1 + \\sqrt{-5}) = 2 + \\sqrt{-5}$, that means $\\alpha = 1$ and $\\beta = -1$ is our verification that $(2 + \\sqrt{-5}) \\in \\mathfrak P_1$.</p>\n\n<p>EDIT: Thanks to Dave R. for pointing out a mistake of arithmetic. I've corrected it now.</p>\n" }, { "answer_id": 2124662, "score": 4, "text": "<p>The functions $f$ and $g$ can be defined, but they have to operate on ideals rather than numbers, and those ideals may or may not be principal.</p>\n\n<p>First, though, let's tidy up the notation. $f_R(\\alpha) = 1$ (or True) if $\\alpha$ is a number in the ring $R$, and $0$ (or False) otherwise. And $g_R^p(\\alpha) = 1$ if $\\alpha$ is a number in the prime ideal generated by $p$, or by a splitting factor of $p$, or co-generated by $p$.</p>\n\n<p>Now let's examine $\\alpha = 82$, $p = 41$ and $R = \\textbf{Z}[i]$. 
Since $41$ splits as $(4 - 5i)(4 + 5i)$, we have $$g_R^p(\\alpha) = f_{\\textbf{Z}[i]}\\left(\\frac{82}{4 - 5i}\\right) + f_{\\textbf{Z}[i]}\\left(\\frac{82}{4 + 5i}\\right) - f_{\\textbf{Z}[i]}\\left(\\frac{82}{41}\\right) = 1 + 1 - 1 = 1.$$ That's correct, right?</p>\n\n<p>Now try $\\alpha = 86$, $p = 43$. As $43$ is inert, we have $g_R^p(\\alpha) = 1$, as expected.</p>\n\n<p>Let's move on to $g_{\\textbf{Z}[\\sqrt{-5}]}^{41}(82)$. Since $41$ splits as $(6 - \\sqrt{-5})(6 + \\sqrt{-5})$, the result should be much the same as $g_{\\textbf{Z}[i]}^{41}(82)$.</p>\n\n<p>And now we get to something really interesting: $g_{\\textbf{Z}[\\sqrt{-5}]}^{43}(76 + 18 \\sqrt{-5})$. $43$ is inert in this ring, and yet $N(76 + 18 \\sqrt{-5}) = 7396$, a multiple of $43$. Worse, $$\\frac{76 + 18 \\sqrt{-5}}{43} = \\frac{2}{43}(38 + 9 \\sqrt{-5}),$$ so $$f_{\\textbf{Z}[\\sqrt{-5}]}\\left(\\frac{76 + 18 \\sqrt{-5}}{43}\\right) = 0.$$</p>\n\n<p>If instead we defined $g_R^p(\\alpha)$ to look for the containment of $\\langle \\alpha \\rangle$ in $\\langle p \\rangle$ or $\\langle p, x \\rangle$ or $\\langle p, \\overline x \\rangle$, we might get the result we want, since $\\langle 76 + 18 \\sqrt{-5} \\rangle$ is in either $\\langle 43, 9 + \\sqrt{-5} \\rangle$ or $\\langle 43, 9 - \\sqrt{-5} \\rangle$, one of those.</p>\n" }, { "answer_id": 2112063, "score": 3, "text": "<p>I don't think what you're asking for is possible, at least not for some rational primes. Let me build on the example Mr. Brooks gave in a comment yesterday.</p>\n\n<p>The classic non-UFD is $\\mathbb Z[\\sqrt{-5}]$. Consider the rational prime $p = 3$. The norm function in the field $\\mathbb Q(\\sqrt{-5})$ is $N(a + b\\sqrt{-5}) = a^2 + 5b^2$. Possible integer norms are 0, 1, 4, 5, 6, 9, ...</p>\n\n<p>Since 3 is an impossible norm, that means 3 is irreducible. 
But 6, the first nontrivial positive multiple of 3, is a possible norm, and in fact $(1 - \\sqrt{-5})(1 + \\sqrt{-5}) = 6$.</p>\n\n<p>So if the union of $\\mathfrak P_1$ and $\\mathfrak P_2$ contains all numbers in this domain with norm divisible by 3, it looks something like $\\langle 3, 1 - \\sqrt{-5} \\rangle \\cup \\langle 3, 1 + \\sqrt{-5} \\rangle$.</p>\n\n<p>But neither of those two ideals are principal. Or at least they don't look principal. If either of them was principal, we'd be able to solve $x \\pm x\\sqrt{-5} = 3$ or $3x = 1 \\pm \\sqrt{-5}$ with $x \\in \\mathbb Z[\\sqrt{-5}]$.</p>\n\n<p>What would be the norm of this number $x$? Remember that the norm is a multiplicative function. To solve the former equation, we would have to have $$N(x) = \\frac{3}{2},$$ which is not an integer at all, and to solve the latter equation we'd need $N(x) = 2$, an impossible norm in this domain.</p>\n\n<p>Now let's look at $2 + \\sqrt{-5}$. Since this number has a norm of 9, it must be contained in $\\mathfrak P_1 \\cup \\mathfrak P_2$. I emphasize that $(2 + \\sqrt{-5}) \\not\\in \\langle 3 \\rangle$ nor $\\langle 1 - \\sqrt{-5} \\rangle$ nor $\\langle 1 + \\sqrt{-5} \\rangle$.</p>\n\n<p>If $\\mathfrak P_1 = \\langle 3, 1 - \\sqrt{-5} \\rangle$, it consists of all numbers in this domain of the form $3\\alpha + (1 - \\sqrt{-5})\\beta$, with $\\alpha$ and $\\beta$ also numbers in this domain. And since $3 + (-1 + \\sqrt{-5}) = 2 + \\sqrt{-5}$, that means $\\alpha = 1$ and $\\beta = -1$ is our verification that $(2 + \\sqrt{-5}) \\in \\mathfrak P_1$.</p>\n\n<p>EDIT: Thanks to Dave R. for pointing out a mistake of arithmetic. I've corrected it now.</p>\n" }, { "answer_id": 2124662, "score": 4, "text": "<p>The functions $f$ and $g$ can be defined, but they have to operate on ideals rather than numbers, and those ideals may or may not be principal.</p>\n\n<p>First, though, let's tidy up the notation. 
$f_R(\\alpha) = 1$ (or True) if $\\alpha$ is a number in the ring $R$, and $0$ (or False) otherwise. And $g_R^p(\\alpha) = 1$ if $\\alpha$ is a number in the prime ideal generated by $p$, or by a splitting factor of $p$, or co-generated by $p$.</p>\n\n<p>Now let's examine $\\alpha = 82$, $p = 41$ and $R = \\textbf{Z}[i]$. Since $41$ splits as $(4 - 5i)(4 + 5i)$, we have $$g_R^p(\\alpha) = f_{\\textbf{Z}[i]}\\left(\\frac{82}{4 - 5i}\\right) + f_{\\textbf{Z}[i]}\\left(\\frac{82}{4 + 5i}\\right) - f_{\\textbf{Z}[i]}\\left(\\frac{82}{41}\\right) = 1 + 1 - 1 = 1.$$ That's correct, right?</p>\n\n<p>Now try $\\alpha = 86$, $p = 43$. As $43$ is inert, we have $g_R^p(\\alpha) = 1$, as expected.</p>\n\n<p>Let's move on to $g_{\\textbf{Z}[\\sqrt{-5}]}^{41}(82)$. Since $41$ splits as $(6 - \\sqrt{-5})(6 + \\sqrt{-5})$, the result should be much the same as $g_{\\textbf{Z}[i]}^{41}(82)$.</p>\n\n<p>And now we get to something really interesting: $g_{\\textbf{Z}[\\sqrt{-5}]}^{43}(76 + 18 \\sqrt{-5})$. $43$ is inert in this ring, and yet $N(76 + 18 \\sqrt{-5}) = 7396$, a multiple of $43$. Worse, $$\\frac{76 + 18 \\sqrt{-5}}{43} = \\frac{2}{43}(38 + 9 \\sqrt{-5}),$$ so $$f_{\\textbf{Z}[\\sqrt{-5}]}\\left(\\frac{76 + 18 \\sqrt{-5}}{43}\\right) = 0.$$</p>\n\n<p>If instead we defined $g_R^p(\\alpha)$ to look for the containment of $\\langle \\alpha \\rangle$ in $\\langle p \\rangle$ or $\\langle p, x \\rangle$ or $\\langle p, \\overline x \\rangle$, we might get the result we want, since $\\langle 76 + 18 \\sqrt{-5} \\rangle$ is in either $\\langle 43, 9 + \\sqrt{-5} \\rangle$ or $\\langle 43, 9 - \\sqrt{-5} \\rangle$, one of those.</p>\n" }, { "answer_id": 2112063, "score": 3, "text": "<p>I don't think what you're asking for is possible, at least not for some rational primes. Let me build on the example Mr. Brooks gave in a comment yesterday.</p>\n\n<p>The classic non-UFD is $\\mathbb Z[\\sqrt{-5}]$. Consider the rational prime $p = 3$. 
The norm function in the field $\\mathbb Q(\\sqrt{-5})$ is $N(a + b\\sqrt{-5}) = a^2 + 5b^2$. Possible integer norms are 0, 1, 4, 5, 6, 9, ...</p>\n\n<p>Since 3 is an impossible norm, that means 3 is irreducible. But 6, the first nontrivial positive multiple of 3, is a possible norm, and in fact $(1 - \\sqrt{-5})(1 + \\sqrt{-5}) = 6$.</p>\n\n<p>So if the union of $\\mathfrak P_1$ and $\\mathfrak P_2$ contains all numbers in this domain with norm divisible by 3, it looks something like $\\langle 3, 1 - \\sqrt{-5} \\rangle \\cup \\langle 3, 1 + \\sqrt{-5} \\rangle$.</p>\n\n<p>But neither of those two ideals are principal. Or at least they don't look principal. If either of them was principal, we'd be able to solve $x \\pm x\\sqrt{-5} = 3$ or $3x = 1 \\pm \\sqrt{-5}$ with $x \\in \\mathbb Z[\\sqrt{-5}]$.</p>\n\n<p>What would be the norm of this number $x$? Remember that the norm is a multiplicative function. To solve the former equation, we would have to have $$N(x) = \\frac{3}{2},$$ which is not an integer at all, and to solve the latter equation we'd need $N(x) = 2$, an impossible norm in this domain.</p>\n\n<p>Now let's look at $2 + \\sqrt{-5}$. Since this number has a norm of 9, it must be contained in $\\mathfrak P_1 \\cup \\mathfrak P_2$. I emphasize that $(2 + \\sqrt{-5}) \\not\\in \\langle 3 \\rangle$ nor $\\langle 1 - \\sqrt{-5} \\rangle$ nor $\\langle 1 + \\sqrt{-5} \\rangle$.</p>\n\n<p>If $\\mathfrak P_1 = \\langle 3, 1 - \\sqrt{-5} \\rangle$, it consists of all numbers in this domain of the form $3\\alpha + (1 - \\sqrt{-5})\\beta$, with $\\alpha$ and $\\beta$ also numbers in this domain. And since $3 + (-1 + \\sqrt{-5}) = 2 + \\sqrt{-5}$, that means $\\alpha = 1$ and $\\beta = -1$ is our verification that $(2 + \\sqrt{-5}) \\in \\mathfrak P_1$.</p>\n\n<p>EDIT: Thanks to Dave R. for pointing out a mistake of arithmetic. 
I've corrected it now.</p>\\n\" }
Question: How would one describe the set of elements of $\mathbb{Z}[\sqrt{-d}]$ whose norm is divisible by a rational prime $p$? Answer 1: <p>The functions $f$ and $g$ can be defined, but they have to operate on ideals rather than numbers, and those ideals may or may not be principal.</p> <p>First, though, let's tidy up the notation. $f_R(\alpha) = 1$ (or True) if $\alpha$ is a number in the ring $R$, and $0$ (or False) otherwise. And $g_R^p(\alpha) = 1$ if $\alpha$ is a number in the prime ideal generated by $p$, or by a splitting factor of $p$, or co-generated by $p$.</p> <p>Now let's examine $\alpha = 82$, $p = 41$ and $R = \textbf{Z}[i]$. Since $41$ splits as $(4 - 5i)(4 + 5i)$, we have $$g_R^p(\alpha) = f_{\textbf{Z}[i]}\left(\frac{82}{4 - 5i}\right) + f_{\textbf{Z}[i]}\left(\frac{82}{4 + 5i}\right) - f_{\textbf{Z}[i]}\left(\frac{82}{41}\right) = 1 + 1 - 1 = 1.$$ That's correct, right?</p> <p>Now try $\alpha = 86$, $p = 43$. As $43$ is inert, we have $g_R^p(\alpha) = 1$, as expected.</p> <p>Let's move on to $g_{\textbf{Z}[\sqrt{-5}]}^{41}(82)$. Since $41$ splits as $(6 - \sqrt{-5})(6 + \sqrt{-5})$, the result should be much the same as $g_{\textbf{Z}[i]}^{41}(82)$.</p> <p>And now we get to something really interesting: $g_{\textbf{Z}[\sqrt{-5}]}^{43}(76 + 18 \sqrt{-5})$. $43$ is irreducible in this ring (no element has norm $43$), and yet $N(76 + 18 \sqrt{-5}) = 7396$, a multiple of $43$. 
Worse, $$\frac{76 + 18 \sqrt{-5}}{43} = \frac{2}{43}(38 + 9 \sqrt{-5}),$$ so $$f_{\textbf{Z}[\sqrt{-5}]}\left(\frac{76 + 18 \sqrt{-5}}{43}\right) = 0.$$</p> <p>If instead we defined $g_R^p(\alpha)$ to look for the containment of $\langle \alpha \rangle$ in $\langle p \rangle$ or $\langle p, x \rangle$ or $\langle p, \overline x \rangle$, we might get the result we want, since $\langle 76 + 18 \sqrt{-5} \rangle$ is in either $\langle 43, 9 + \sqrt{-5} \rangle$ or $\langle 43, 9 - \sqrt{-5} \rangle$, one of those.</p> Answer 2: <p>I don't think what you're asking for is possible, at least not for some rational primes. Let me build on the example Mr. Brooks gave in a comment yesterday.</p> <p>The classic non-UFD is $\mathbb Z[\sqrt{-5}]$. Consider the rational prime $p = 3$. The norm function in the field $\mathbb Q(\sqrt{-5})$ is $N(a + b\sqrt{-5}) = a^2 + 5b^2$. Possible integer norms are 0, 1, 4, 5, 6, 9, ...</p> <p>Since 3 is an impossible norm, that means 3 is irreducible. But 6, the first nontrivial positive multiple of 3, is a possible norm, and in fact $(1 - \sqrt{-5})(1 + \sqrt{-5}) = 6$.</p> <p>So if the union of $\mathfrak P_1$ and $\mathfrak P_2$ contains all numbers in this domain with norm divisible by 3, it looks something like $\langle 3, 1 - \sqrt{-5} \rangle \cup \langle 3, 1 + \sqrt{-5} \rangle$.</p> <p>But neither of those two ideals is principal. Or at least they don't look principal. If either of them were principal, we'd be able to solve $x \pm x\sqrt{-5} = 3$ or $3x = 1 \pm \sqrt{-5}$ with $x \in \mathbb Z[\sqrt{-5}]$.</p> <p>What would be the norm of this number $x$? Remember that the norm is a multiplicative function. To solve the former equation, we would have to have $$N(x) = \frac{3}{2},$$ which is not an integer at all, and to solve the latter equation we'd need $$N(x) = \frac{2}{3},$$ again not an integer.</p> <p>Now let's look at $2 + \sqrt{-5}$. 
Since this number has a norm of 9, it must be contained in $\mathfrak P_1 \cup \mathfrak P_2$. I emphasize that $(2 + \sqrt{-5}) \not\in \langle 3 \rangle$ nor $\langle 1 - \sqrt{-5} \rangle$ nor $\langle 1 + \sqrt{-5} \rangle$.</p> <p>If $\mathfrak P_1 = \langle 3, 1 - \sqrt{-5} \rangle$, it consists of all numbers in this domain of the form $3\alpha + (1 - \sqrt{-5})\beta$, with $\alpha$ and $\beta$ also numbers in this domain. And since $3 + (-1 + \sqrt{-5}) = 2 + \sqrt{-5}$, that means $\alpha = 1$ and $\beta = -1$ is our verification that $(2 + \sqrt{-5}) \in \mathfrak P_1$.</p> <p>EDIT: Thanks to Dave R. for pointing out a mistake of arithmetic. I've corrected it now.</p> Answer 3: <p>The functions $f$ and $g$ can be defined, but they have to operate on ideals rather than numbers, and those ideals may or may not be principal.</p> <p>First, though, let's tidy up the notation. $f_R(\alpha) = 1$ (or True) if $\alpha$ is a number in the ring $R$, and $0$ (or False) otherwise. And $g_R^p(\alpha) = 1$ if $\alpha$ is a number in the prime ideal generated by $p$, or by a splitting factor of $p$, or co-generated by $p$.</p> <p>Now let's examine $\alpha = 82$, $p = 41$ and $R = \textbf{Z}[i]$. Since $41$ splits as $(4 - 5i)(4 + 5i)$, we have $$g_R^p(\alpha) = f_{\textbf{Z}[i]}\left(\frac{82}{4 - 5i}\right) + f_{\textbf{Z}[i]}\left(\frac{82}{4 + 5i}\right) - f_{\textbf{Z}[i]}\left(\frac{82}{41}\right) = 1 + 1 - 1 = 1.$$ That's correct, right?</p> <p>Now try $\alpha = 86$, $p = 43$. As $43$ is inert, we have $g_R^p(\alpha) = 1$, as expected.</p> <p>Let's move on to $g_{\textbf{Z}[\sqrt{-5}]}^{41}(82)$. Since $41$ splits as $(6 - \sqrt{-5})(6 + \sqrt{-5})$, the result should be much the same as $g_{\textbf{Z}[i]}^{41}(82)$.</p> <p>And now we get to something really interesting: $g_{\textbf{Z}[\sqrt{-5}]}^{43}(76 + 18 \sqrt{-5})$. $43$ is inert in this ring, and yet $N(76 + 18 \sqrt{-5}) = 7396$, a multiple of $43$. 
Worse, $$\frac{76 + 18 \sqrt{-5}}{43} = \frac{2}{43}(38 + 9 \sqrt{-5}),$$ so $$f_{\textbf{Z}[\sqrt{-5}]}\left(\frac{76 + 18 \sqrt{-5}}{43}\right) = 0.$$</p> <p>If instead we defined $g_R^p(\alpha)$ to look for the containment of $\langle \alpha \rangle$ in $\langle p \rangle$ or $\langle p, x \rangle$ or $\langle p, \overline x \rangle$, we might get the result we want, since $\langle 76 + 18 \sqrt{-5} \rangle$ is in either $\langle 43, 9 + \sqrt{-5} \rangle$ or $\langle 43, 9 - \sqrt{-5} \rangle$, one of those.</p> Answer 4: <p>I don't think what you're asking for is possible, at least not for some rational primes. Let me build on the example Mr. Brooks gave in a comment yesterday.</p> <p>The classic non-UFD is $\mathbb Z[\sqrt{-5}]$. Consider the rational prime $p = 3$. The norm function in the field $\mathbb Q(\sqrt{-5})$ is $N(a + b\sqrt{-5}) = a^2 + 5b^2$. Possible integer norms are 0, 1, 4, 5, 6, 9, ...</p> <p>Since 3 is an impossible norm, that means 3 is irreducible. But 6, the first nontrivial positive multiple of 3, is a possible norm, and in fact $(1 - \sqrt{-5})(1 + \sqrt{-5}) = 6$.</p> <p>So if the union of $\mathfrak P_1$ and $\mathfrak P_2$ contains all numbers in this domain with norm divisible by 3, it looks something like $\langle 3, 1 - \sqrt{-5} \rangle \cup \langle 3, 1 + \sqrt{-5} \rangle$.</p> <p>But neither of those two ideals are principal. Or at least they don't look principal. If either of them was principal, we'd be able to solve $x \pm x\sqrt{-5} = 3$ or $3x = 1 \pm \sqrt{-5}$ with $x \in \mathbb Z[\sqrt{-5}]$.</p> <p>What would be the norm of this number $x$? Remember that the norm is a multiplicative function. To solve the former equation, we would have to have $$N(x) = \frac{3}{2},$$ which is not an integer at all, and to solve the latter equation we'd need $N(x) = 2$, an impossible norm in this domain.</p> <p>Now let's look at $2 + \sqrt{-5}$. 
Since this number has a norm of 9, it must be contained in $\mathfrak P_1 \cup \mathfrak P_2$. I emphasize that $(2 + \sqrt{-5}) \not\in \langle 3 \rangle$ nor $\langle 1 - \sqrt{-5} \rangle$ nor $\langle 1 + \sqrt{-5} \rangle$.</p> <p>If $\mathfrak P_1 = \langle 3, 1 - \sqrt{-5} \rangle$, it consists of all numbers in this domain of the form $3\alpha + (1 - \sqrt{-5})\beta$, with $\alpha$ and $\beta$ also numbers in this domain. And since $3 + (-1 + \sqrt{-5}) = 2 + \sqrt{-5}$, that means $\alpha = 1$ and $\beta = -1$ is our verification that $(2 + \sqrt{-5}) \in \mathfrak P_1$.</p> <p>EDIT: Thanks to Dave R. for pointing out a mistake of arithmetic. I've corrected it now.</p> Answer 5: <p>The functions $f$ and $g$ can be defined, but they have to operate on ideals rather than numbers, and those ideals may or may not be principal.</p> <p>First, though, let's tidy up the notation. $f_R(\alpha) = 1$ (or True) if $\alpha$ is a number in the ring $R$, and $0$ (or False) otherwise. And $g_R^p(\alpha) = 1$ if $\alpha$ is a number in the prime ideal generated by $p$, or by a splitting factor of $p$, or co-generated by $p$.</p> <p>Now let's examine $\alpha = 82$, $p = 41$ and $R = \textbf{Z}[i]$. Since $41$ splits as $(4 - 5i)(4 + 5i)$, we have $$g_R^p(\alpha) = f_{\textbf{Z}[i]}\left(\frac{82}{4 - 5i}\right) + f_{\textbf{Z}[i]}\left(\frac{82}{4 + 5i}\right) - f_{\textbf{Z}[i]}\left(\frac{82}{41}\right) = 1 + 1 - 1 = 1.$$ That's correct, right?</p> <p>Now try $\alpha = 86$, $p = 43$. As $43$ is inert, we have $g_R^p(\alpha) = 1$, as expected.</p> <p>Let's move on to $g_{\textbf{Z}[\sqrt{-5}]}^{41}(82)$. Since $41$ splits as $(6 - \sqrt{-5})(6 + \sqrt{-5})$, the result should be much the same as $g_{\textbf{Z}[i]}^{41}(82)$.</p> <p>And now we get to something really interesting: $g_{\textbf{Z}[\sqrt{-5}]}^{43}(76 + 18 \sqrt{-5})$. $43$ is inert in this ring, and yet $N(76 + 18 \sqrt{-5}) = 7396$, a multiple of $43$. 
Worse, $$\frac{76 + 18 \sqrt{-5}}{43} = \frac{2}{43}(38 + 9 \sqrt{-5}),$$ so $$f_{\textbf{Z}[\sqrt{-5}]}\left(\frac{76 + 18 \sqrt{-5}}{43}\right) = 0.$$</p> <p>If instead we defined $g_R^p(\alpha)$ to look for the containment of $\langle \alpha \rangle$ in $\langle p \rangle$ or $\langle p, x \rangle$ or $\langle p, \overline x \rangle$, we might get the result we want, since $\langle 76 + 18 \sqrt{-5} \rangle$ is in either $\langle 43, 9 + \sqrt{-5} \rangle$ or $\langle 43, 9 - \sqrt{-5} \rangle$, one of those.</p>
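The norm arithmetic in the two answers above is easy to verify mechanically. Below is a short Python sketch of those checks; the helper names `norm5`, `mul5`, `add5` and `zi_divides` are our own, not anything from the answers. It confirms that 2 and 3 are impossible norms in $\mathbb Z[\sqrt{-5}]$, that $(1 - \sqrt{-5})(1 + \sqrt{-5}) = 6$, that $2 + \sqrt{-5}$ has norm 9 and equals $3 \cdot 1 + (1 - \sqrt{-5}) \cdot (-1)$, that $N(76 + 18\sqrt{-5})$ is a multiple of 43, and the $g_{\textbf{Z}[i]}^{41}(82) = 1$ computation:

```python
from itertools import product

# Elements of Z[sqrt(-5)] are integer pairs (a, b) meaning a + b*sqrt(-5).
def norm5(a, b):
    return a * a + 5 * b * b          # N(a + b*sqrt(-5)) = a^2 + 5 b^2

def mul5(x, y):                        # (a + b s)(c + d s) with s^2 = -5
    (a, b), (c, d) = x, y
    return (a * c - 5 * b * d, a * d + b * c)

def add5(x, y):
    return (x[0] + y[0], x[1] + y[1])

# 2 and 3 are impossible norms, so 3 is irreducible and N(x) = 2 has no solution.
small = {norm5(a, b) for a, b in product(range(-4, 5), repeat=2)}
assert 2 not in small and 3 not in small

assert mul5((1, -1), (1, 1)) == (6, 0)     # (1 - s)(1 + s) = 6
assert norm5(2, 1) == 9                    # N(2 + s) = 9
# 2 + s = 3*alpha + (1 - s)*beta with alpha = 1, beta = -1:
assert add5((3, 0), mul5((1, -1), (-1, 0))) == (2, 1)
# N(76 + 18 s) = 7396 = 43 * 172, even though no element has norm 43.
assert norm5(76, 18) == 43 * 172

# Exact divisibility test in Z[i]: (re, im) pairs; num * conj(den) over N(den).
def zi_divides(num, den):
    (a, b), (c, d) = num, den
    n = c * c + d * d
    re, im = a * c + b * d, b * c - a * d
    return re % n == 0 and im % n == 0

# g for alpha = 82, p = 41 = (4 - 5i)(4 + 5i):
# f(82/(4 - 5i)) + f(82/(4 + 5i)) - f(82/41) = 1 + 1 - 1 = 1.
g = (zi_divides((82, 0), (4, -5)) + zi_divides((82, 0), (4, 5))
    - zi_divides((82, 0), (41, 0)))
assert g == 1
```

The same `norm5` check exhibits the tension the second answer turns on: $7396 = N(76 + 18\sqrt{-5})$ is divisible by 43 while 43 itself is not a norm.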
Norm of the product of a self-adjoint operator and one of its spectral projections
2,107,684
6
https://math.stackexchange.com/questions/2107684/norm-of-the-product-of-a-self-adjoint-operator-and-one-of-its-spectral-projectio
<p>Let $A$ be a bounded self-adjoint operator on a Hilbert space $H$ such that $A$ has a cyclic vector. That is, there exists $x \in H$ such that the linear subspace spanned by $\{ x, Ax, A^2x,...\}$ is dense in $H$.</p> <p>Also let $E$ denote the spectral measure corresponding to $A$, i.e. $E : \Sigma \rightarrow B(H)$ with $E(\Omega)=E_{\Omega}=\chi_{\Omega \cap \sigma(A)} (A)$ given by the functional calculus for $A$. Here $\Sigma$ is the Borel $\sigma$-algebra over $\mathbb{R}$ and $\chi_M$ denotes the characteristic function of a set $M$. $\sigma(A)$ is the spectrum of $A$.</p> <p><strong>Question</strong>: Assume that the interval $[0,1]$ is contained in the spectrum $\sigma(A)$ of $A$. What can we say about the norm $||E_\Omega A||$ if $\Omega$ is, for example, a subinterval of $[0,1]$, say $\Omega=[\tfrac{1}{4}, \tfrac{1}{2})$? What if $\Omega=[\tfrac{1}{4}, \tfrac{1}{3}) \cup ([\tfrac{1}{3}, \tfrac{1}{2}) \cap \mathbb{Q})$?</p> <p>One thing we know is that $||E_\Omega A|| = ||A E_\Omega||$. So we only have to compute the norm on the range of $E_\Omega$. But I don't really see how the cyclic vector comes into play. It tells us that $H$ is separable. I don't know how that could be helpful.</p>
[ { "answer_id": 2108371, "score": 4, "text": "<p>Whenever you apply a spectral projection $E(S)\\ne I$ to $A$, you end up with $0$ in the point spectrum of $E(S)A=AE(S)$ because $\\{AE(S)\\}E(\\sigma\\setminus S)=0$ and $E(\\sigma\\setminus S) \\ne 0$. So that special case always requires special attention.</p>\n\n<p>If $(a,b)\\subseteq\\sigma(A)$, then $E(a,b) \\ne 0$; otherwise $\\mu \\in (a,b)$ would lead to a bounded operator,\n$$\n R(\\mu)=\\int_{\\sigma}\\frac{1}{\\lambda-\\mu}dE(\\lambda),\n$$\nwhich would have to be the resolvent $R(\\mu)=(A-\\mu I)^{-1}$. And that would force $(a,b)\\subseteq\\rho(A)$, contrary to assumption.</p>\n\n<p>Suppose $[0,1]\\subseteq\\sigma(A)$. Then $E(a,b) \\ne 0$ for $(a,b)\\subseteq[0,1]$. I'll first consider your first question where you ask about the spectrum of $AE[1/4,1/2)$. Automatically $0\\in\\sigma(AE[1/4,1/2))$ because $\\{AE[1/4,1/2)\\}E(1/2,1)=0$ and $E(1/2,1)\\ne 0$. For $\\mu\\ne 0$ and $\\mu\\notin [1/4,1/2]$,\n$$\n AE[1/4,1/2)-\\mu I=(A-\\mu I)E[1/4,1/2)-\\mu E(\\sigma\\setminus[1/4,1/2))\n$$\nhas a bounded inverse given by\n$$\n (AE[1/4,1/2)-\\mu I)^{-1}=\\int_{[1/4,1/2)}\\frac{1}{\\lambda-\\mu}dE(\\lambda)-\\frac{1}{\\mu}E(\\sigma\\setminus[1/4,1/2)).\n$$\nTherefore, $\\sigma(AE[1/4,1/2))\\subseteq [1/4,1/2]\\cup\\{0\\}$. 
Conversely $0\\in\\sigma(AE[1/4,1/2))$ was noted above, and, for any $\\mu\\in(1/4,1/2)$, the projections $E(\\mu-\\delta,\\mu+\\delta)\\ne 0$ for all $\\delta &gt; 0$, which gives the existence of a non-zero vector $x_{\\delta}$ such that $E(\\mu-\\delta,\\mu+\\delta)x_{\\delta}=x_{\\delta}$ and, hence,\n\\begin{align}\n \\|AE[1/4,1/2)x_{\\delta}-\\mu x_{\\delta}\\|\n &amp; = \\|(A-\\mu I)E(\\mu-\\delta,\\mu+\\delta)x_{\\delta}\\| \\\\\n &amp; \\le \\delta \\|E(\\mu-\\delta,\\mu+\\delta)x_{\\delta}\\| \\\\\n &amp; = \\delta\\|x_{\\delta}\\|.\n\\end{align}\nSo $AE[1/4,1/2)-\\mu I$ cannot be continuously invertible, which proves\n$$\n \\{0\\}\\cup (1/4,1/2) \\subseteq \\sigma(AE[1/4,1/2)).\n$$\nBecause the spectrum is closed,\n$$\n \\{0\\} \\cup [1/4,1/2]\\subseteq \\sigma(AE[1/4,1/2)).\n$$\nThe opposite inclusion was previously shown. So\n$$\n \\sigma(AE[1/4,1/2))=\\{0\\}\\cup[1/4,1/2].\n$$\nThe operator $AE[1/4,1/2)$ is self-adjoint, so its norm is its spectral radius, which gives $\\|AE[1/4,1/2)\\|=1/2$.</p>\n\n<p>I'll let you consider the other cases. Note for example that $S=[1/3,1/2]\\cap\\mathbb{Q}$ could be such that $E(S)=0$, or it could give $E(S)=E[1/3,1/2]$, or $E(S)=E(T)$ could hold for a lot of closed subsets $T$ of $[1/3,1/2]$ because the spectrum is closed, and every subset of $[1/3,1/2]\\cap\\mathbb{Q}$ could consist of eigenvalues.</p>\n" }, { "answer_id": 2131605, "score": 2, "text": "<p>In the second case where $\\Omega=[\\tfrac{1}{4}, \\tfrac{1}{3}) \\cup ([\\tfrac{1}{3}, \\tfrac{1}{2}) \\cap \\mathbb{Q})$ there is no definite answer <em>even if we assume the existence of a cyclic vector</em>.</p>\n\n<p>First, consider $A$ to be multiplication by $f(x)=x$ on $L^2[0,1]$ with the usual Lebesgue measure. A cyclic vector is given by the constant function 1 since polynomials are dense in $L^2[0,1]$. 
Then the spectral measure $E_\\Omega$ is given explicitly by the functional calculus as the multiplication operator $(\\chi_\\Omega (A) \\phi) (x) = \\chi_\\Omega(x) \\phi(x)$.</p>\n\n<p>In particular, the operator $E_\\Omega A$ is given by multiplication with $g(x)=x \\chi_\\Omega(x)$. Computing its norm amounts to computing its spectrum, which in turn amounts to computing the essential range of $g$ with respect to the Lebesgue measure. Since $[\\tfrac{1}{3}, \\tfrac{1}{2}) \\cap \\mathbb{Q}$ is a Lebesgue null set, the essential range of $g$ is $\\{ 0 \\} \\cup [\\tfrac{1}{4}, \\tfrac{1}{3}]$, so $|| E_\\Omega A ||= \\tfrac{1}{3}$.</p>\n\n<p>Second, consider $L^2[0,1]$ with the measure given by $\\mu = \\lambda + \\delta_{\\tfrac{2}{5}}$, where $\\lambda$ denotes the Lebesgue measure. I.e. we give the rational point $\\tfrac{2}{5} \\in [\\tfrac{1}{3}, \\tfrac{1}{2}) \\cap \\mathbb{Q} \\subseteq \\Omega$ mass 1. (The atom must lie in $\\Omega$; a point mass at $\\tfrac{1}{2}$, say, would change nothing, because $\\tfrac{1}{2} \\notin \\Omega$.) Then, by the same argument as above, we have to find the essential range of $g(x)=x \\chi_\\Omega(x)$ w.r.t. $\\mu$. Here the essential range is $\\{ 0 \\} \\cup [\\tfrac{1}{4}, \\tfrac{1}{3}] \\cup \\{ \\tfrac{2}{5} \\}$, so $|| E_\\Omega A ||= \\tfrac{2}{5}$.</p>\n\n<p>Note: In the second case, we also have a cyclic vector, namely the constant function 1. This is a consequence of the following: for finite, regular Borel measures on compact subsets $K$ of $\\mathbb{R}$, the continuous functions are dense in $L^2(K)$. Now, the polynomials are dense in the continuous functions (w.r.t. the sup-norm), so in particular they are dense w.r.t. the $L^2$ norm.</p>\n" } ]
Conversely $0\\in\\sigma(AE[1/4,1/2))$ was noted above, and, for any $\\mu\\in(1/4,1/2)$, the projections $E(\\mu-\\delta,\\mu+\\delta)\\ne 0$ for all $\\delta &gt; 0$, which gives the existence of a non-zero vector $x_{\\delta}$ such that $E(\\mu-\\delta,\\mu+\\delta)x_{\\delta}=x_{\\delta}$ and, hence,\n\\begin{align}\n \\|AE[1/4,1/2)x_{\\delta}-\\mu x_{\\delta}\\|\n &amp; = \\|(A-\\mu I)E(\\mu-\\delta,\\mu+\\delta)x_{\\delta}\\| \\\\\n &amp; \\le \\delta \\|E(\\mu-\\delta,\\mu+\\delta)x_{\\delta}\\| \\\\\n &amp; = \\delta\\|x_{\\delta}\\|.\n\\end{align}\nSo $AE[1/4,1/2)-\\mu I$ cannot be continuously invertible, which proves\n$$\n \\{0\\}\\cup (1/4,1/2) \\subseteq \\sigma(AE[1/4,1/2))\n$$\nBecause the spectrum is closed,\n$$\n \\{0\\} \\cup [1/4,1/2]\\subseteq \\sigma(AE[1/4,1/2)).\n$$\nThe opposite inclusion was previously shown. So\n$$\n \\sigma(AE[1/4,1/2))=\\{0\\}\\cup[1/4,1/2].\n$$\nThe operator $AE[1/4,1/2]$ is selfadjoint. So its norm is its spectral radius, which gives $\\|AE[1/4,1/2)\\|=1/2$.</p>\n\n<p>I'll let you consider the other cases. Note for example that $S=[1/3,1/2]\\cap\\mathbb{Q}$ could be such that $E(S)=0$, or it could give $E(S)=E[1/3,1/2]$, or $E(S)=E(T)$ could hold for a lot of closed subsets $T$ of $[1/3,1/2]$ because the spectrum is closed, and every subset of $[1/3,1/2]\\cap\\mathbb{Q}$ could consist of eigenvalues.</p>\n" }, { "answer_id": 2131605, "score": 2, "text": "<p>In the second case where $\\Omega=[\\tfrac{1}{4}, \\tfrac{1}{3}) \\cup ([\\tfrac{1}{3}, \\tfrac{1}{2}) \\cap \\mathbb{Q})$ there is no definite answer <em>even if we assume the existence of a cyclic vector</em>.</p>\n\n<p>First, consider $A$ to be multiplication by $f(x)=x$ on $L^2[0,1]$ with the usual Lebesgue measure. A cyclic vector is given by the constant function 1 since polynomials are dense in $L^2[0,1]$. 
Then the spectral measure $E_\\Omega$ is given explicitly by the functional calculus as the operator $\\chi_\\Omega (A) \\phi (x) = \\chi_\\Omega(x) \\phi$.</p>\n\n<p>In particular, the operator $E_\\Omega A$ is given by multiplication with $g(x)=x \\chi_\\Omega(x)$. Computing its norm amounts to computing its spectrum which amounts to computing its essential range with respect to the Lebesgue measure. Clearly, the essential range of $g$ is $\\{ 0 \\} \\cup [\\tfrac{1}{4}, \\tfrac{1}{3}]$, so $|| E_\\Omega A ||= \\tfrac{1}{3}$.</p>\n\n<p>Second, consider $L^2[0,1]$ with the measure given by $\\mu = \\lambda + \\delta_{\\tfrac{1}{2}}$, where $\\lambda$ denotes the Lebesgue measure. I.e. we give the point $\\tfrac{1}{2}$ mass 1. Then, by the same argument as above we have to find the essential range of $g(x)=x \\chi_\\Omega(x)$ w.r.t. $\\mu$. Here, we have that the essential range is $\\{ 0 \\} \\cup [\\tfrac{1}{4}, \\tfrac{1}{3}] \\cup \\{ \\tfrac{1}{2} \\}$, so $|| E_\\Omega A ||= \\tfrac{1}{2}$.</p>\n\n<p>Note: In the second case, we also have a cyclic vector, namely the constant function 1. This is consequence of the following: For finite, regular Borel measures on compact subsets $K$ of $\\mathbb{R}$ the continuous functions are dense in $L^2(K)$. Now, the polynomials are dense in the continuous functions (w.r.t. to the sup-norm), so particular they are dense w.r.t. to the $L^2$ norm.</p>\n" }, { "answer_id": 2108371, "score": 4, "text": "<p>Whenever you apply a spectral projection $E(S)\\ne I$ to $A$, you end up with $0$ in the point spectrum of $E(S)A=AE(S)$ because $\\{AE(S)\\}E(\\sigma\\setminus S)=0$ and $E(\\sigma\\setminus S) \\ne 0$. So that special case always requires special attention.</p>\n\n<p>If $(a,b)\\subseteq\\sigma(A)$, then $E(a,b) \\ne 0$; otherwise $\\mu \\in (a,b)$ would lead to a bounded operator,\n$$\n R(\\mu)=\\int_{\\sigma}\\frac{1}{\\lambda-\\mu}dE(\\mu),\n$$\nwhich would have to be the resolvent $R(\\mu)=(A-\\mu I)^{-1}$. 
And that would force $(a,b)\\in\\rho(A)$, contrary to assumption.</p>\n\n<p>Suppose $[0,1]\\subseteq\\sigma(A)$. Then $E(a,b) \\ne 0$ for $(a,b)\\subseteq[0,1]$. I'll first consider your first question where you ask about the spectrum of $AE[1/4,1/2)$. Automatically $0\\in\\sigma(AE[1/4,1/2))$ because $\\{AE[1/4,1/2)\\}E(1/2,1)=0$ and $E(1/2,1)\\ne 0$. For $\\mu\\ne 0$ and $\\mu\\notin [1/4,1/2]$,\n$$\n AE[1/4,1/2)-\\mu I=(A-\\mu I)E[1/4,1/2)-\\mu E(\\sigma\\setminus[1/4,1/2))\n$$\nhas a bounded inverse given by\n$$\n (AE[1/4,1/2)-\\mu I)^{-1}=\\int_{[1/4,1/2)}\\frac{1}{\\lambda-\\mu}dE(\\lambda)-\\frac{1}{\\mu}E(\\sigma\\setminus[1/4,1/2)).\n$$\nTherefore, $\\sigma(AE[1/4,1/2))\\subseteq [1/4,1/2]\\cup\\{0\\}$. Conversely $0\\in\\sigma(AE[1/4,1/2))$ was noted above, and, for any $\\mu\\in(1/4,1/2)$, the projections $E(\\mu-\\delta,\\mu+\\delta)\\ne 0$ for all $\\delta &gt; 0$, which gives the existence of a non-zero vector $x_{\\delta}$ such that $E(\\mu-\\delta,\\mu+\\delta)x_{\\delta}=x_{\\delta}$ and, hence,\n\\begin{align}\n \\|AE[1/4,1/2)x_{\\delta}-\\mu x_{\\delta}\\|\n &amp; = \\|(A-\\mu I)E(\\mu-\\delta,\\mu+\\delta)x_{\\delta}\\| \\\\\n &amp; \\le \\delta \\|E(\\mu-\\delta,\\mu+\\delta)x_{\\delta}\\| \\\\\n &amp; = \\delta\\|x_{\\delta}\\|.\n\\end{align}\nSo $AE[1/4,1/2)-\\mu I$ cannot be continuously invertible, which proves\n$$\n \\{0\\}\\cup (1/4,1/2) \\subseteq \\sigma(AE[1/4,1/2))\n$$\nBecause the spectrum is closed,\n$$\n \\{0\\} \\cup [1/4,1/2]\\subseteq \\sigma(AE[1/4,1/2)).\n$$\nThe opposite inclusion was previously shown. So\n$$\n \\sigma(AE[1/4,1/2))=\\{0\\}\\cup[1/4,1/2].\n$$\nThe operator $AE[1/4,1/2]$ is selfadjoint. So its norm is its spectral radius, which gives $\\|AE[1/4,1/2)\\|=1/2$.</p>\n\n<p>I'll let you consider the other cases. 
Note for example that $S=[1/3,1/2]\\cap\\mathbb{Q}$ could be such that $E(S)=0$, or it could give $E(S)=E[1/3,1/2]$, or $E(S)=E(T)$ could hold for a lot of closed subsets $T$ of $[1/3,1/2]$ because the spectrum is closed, and every subset of $[1/3,1/2]\\cap\\mathbb{Q}$ could consist of eigenvalues.</p>\n" }, { "answer_id": 2131605, "score": 2, "text": "<p>In the second case where $\\Omega=[\\tfrac{1}{4}, \\tfrac{1}{3}) \\cup ([\\tfrac{1}{3}, \\tfrac{1}{2}) \\cap \\mathbb{Q})$ there is no definite answer <em>even if we assume the existence of a cyclic vector</em>.</p>\n\n<p>First, consider $A$ to be multiplication by $f(x)=x$ on $L^2[0,1]$ with the usual Lebesgue measure. A cyclic vector is given by the constant function 1 since polynomials are dense in $L^2[0,1]$. Then the spectral measure $E_\\Omega$ is given explicitly by the functional calculus as the operator $\\chi_\\Omega (A) \\phi (x) = \\chi_\\Omega(x) \\phi$.</p>\n\n<p>In particular, the operator $E_\\Omega A$ is given by multiplication with $g(x)=x \\chi_\\Omega(x)$. Computing its norm amounts to computing its spectrum which amounts to computing its essential range with respect to the Lebesgue measure. Clearly, the essential range of $g$ is $\\{ 0 \\} \\cup [\\tfrac{1}{4}, \\tfrac{1}{3}]$, so $|| E_\\Omega A ||= \\tfrac{1}{3}$.</p>\n\n<p>Second, consider $L^2[0,1]$ with the measure given by $\\mu = \\lambda + \\delta_{\\tfrac{1}{2}}$, where $\\lambda$ denotes the Lebesgue measure. I.e. we give the point $\\tfrac{1}{2}$ mass 1. Then, by the same argument as above we have to find the essential range of $g(x)=x \\chi_\\Omega(x)$ w.r.t. $\\mu$. Here, we have that the essential range is $\\{ 0 \\} \\cup [\\tfrac{1}{4}, \\tfrac{1}{3}] \\cup \\{ \\tfrac{1}{2} \\}$, so $|| E_\\Omega A ||= \\tfrac{1}{2}$.</p>\n\n<p>Note: In the second case, we also have a cyclic vector, namely the constant function 1. 
This is consequence of the following: For finite, regular Borel measures on compact subsets $K$ of $\\mathbb{R}$ the continuous functions are dense in $L^2(K)$. Now, the polynomials are dense in the continuous functions (w.r.t. to the sup-norm), so particular they are dense w.r.t. to the $L^2$ norm.</p>\n" } ]
Question: Norm of the product of a self-adjoint operator and one of its spectral projections Answer 1: <p>Whenever you apply a spectral projection $E(S)\ne I$ to $A$, you end up with $0$ in the point spectrum of $E(S)A=AE(S)$ because $\{AE(S)\}E(\sigma\setminus S)=0$ and $E(\sigma\setminus S) \ne 0$. So that special case always requires special attention.</p> <p>If $(a,b)\subseteq\sigma(A)$, then $E(a,b) \ne 0$; otherwise $\mu \in (a,b)$ would lead to a bounded operator, $$ R(\mu)=\int_{\sigma}\frac{1}{\lambda-\mu}dE(\lambda), $$ which would have to be the resolvent $R(\mu)=(A-\mu I)^{-1}$. And that would force $(a,b)\in\rho(A)$, contrary to assumption.</p> <p>Suppose $[0,1]\subseteq\sigma(A)$. Then $E(a,b) \ne 0$ for $(a,b)\subseteq[0,1]$. I'll first consider your first question where you ask about the spectrum of $AE[1/4,1/2)$. Automatically $0\in\sigma(AE[1/4,1/2))$ because $\{AE[1/4,1/2)\}E(1/2,1)=0$ and $E(1/2,1)\ne 0$. For $\mu\ne 0$ and $\mu\notin [1/4,1/2]$, $$ AE[1/4,1/2)-\mu I=(A-\mu I)E[1/4,1/2)-\mu E(\sigma\setminus[1/4,1/2)) $$ has a bounded inverse given by $$ (AE[1/4,1/2)-\mu I)^{-1}=\int_{[1/4,1/2)}\frac{1}{\lambda-\mu}dE(\lambda)-\frac{1}{\mu}E(\sigma\setminus[1/4,1/2)). $$ Therefore, $\sigma(AE[1/4,1/2))\subseteq [1/4,1/2]\cup\{0\}$. Conversely $0\in\sigma(AE[1/4,1/2))$ was noted above, and, for any $\mu\in(1/4,1/2)$, the projections $E(\mu-\delta,\mu+\delta)\ne 0$ for all $\delta &gt; 0$, which gives the existence of a non-zero vector $x_{\delta}$ such that $E(\mu-\delta,\mu+\delta)x_{\delta}=x_{\delta}$ and, hence, \begin{align} \|AE[1/4,1/2)x_{\delta}-\mu x_{\delta}\| &amp; = \|(A-\mu I)E(\mu-\delta,\mu+\delta)x_{\delta}\| \\ &amp; \le \delta \|E(\mu-\delta,\mu+\delta)x_{\delta}\| \\ &amp; = \delta\|x_{\delta}\|. \end{align} So $AE[1/4,1/2)-\mu I$ cannot be continuously invertible, which proves $$ \{0\}\cup (1/4,1/2) \subseteq \sigma(AE[1/4,1/2)). $$ Because the spectrum is closed, $$ \{0\} \cup [1/4,1/2]\subseteq \sigma(AE[1/4,1/2)). 
$$ The opposite inclusion was previously shown. So $$ \sigma(AE[1/4,1/2))=\{0\}\cup[1/4,1/2]. $$ The operator $AE[1/4,1/2)$ is self-adjoint. So its norm is its spectral radius, which gives $\|AE[1/4,1/2)\|=1/2$.</p> <p>I'll let you consider the other cases. Note for example that $S=[1/3,1/2]\cap\mathbb{Q}$ could be such that $E(S)=0$, or it could give $E(S)=E[1/3,1/2]$, or $E(S)=E(T)$ could hold for a lot of closed subsets $T$ of $[1/3,1/2]$ because the spectrum is closed, and every subset of $[1/3,1/2]\cap\mathbb{Q}$ could consist of eigenvalues.</p> Answer 2: <p>In the second case where $\Omega=[\tfrac{1}{4}, \tfrac{1}{3}) \cup ([\tfrac{1}{3}, \tfrac{1}{2}) \cap \mathbb{Q})$ there is no definite answer <em>even if we assume the existence of a cyclic vector</em>.</p> <p>First, consider $A$ to be multiplication by $f(x)=x$ on $L^2[0,1]$ with the usual Lebesgue measure. A cyclic vector is given by the constant function 1 since polynomials are dense in $L^2[0,1]$. Then the spectral measure $E_\Omega$ is given explicitly by the functional calculus as the operator $\chi_\Omega (A) \phi (x) = \chi_\Omega(x) \phi$.</p> <p>In particular, the operator $E_\Omega A$ is given by multiplication with $g(x)=x \chi_\Omega(x)$. Computing its norm amounts to computing its spectrum which amounts to computing its essential range with respect to the Lebesgue measure. Clearly, the essential range of $g$ is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}]$, so $|| E_\Omega A ||= \tfrac{1}{3}$.</p> <p>Second, consider $L^2[0,1]$ with the measure given by $\mu = \lambda + \delta_{\tfrac{1}{2}}$, where $\lambda$ denotes the Lebesgue measure. I.e. we give the point $\tfrac{1}{2}$ mass 1. Then, by the same argument as above we have to find the essential range of $g(x)=x \chi_\Omega(x)$ w.r.t. $\mu$. 
Here, we have that the essential range is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}] \cup \{ \tfrac{1}{2} \}$, so $|| E_\Omega A ||= \tfrac{1}{2}$.</p> <p>Note: In the second case, we also have a cyclic vector, namely the constant function 1. This is consequence of the following: For finite, regular Borel measures on compact subsets $K$ of $\mathbb{R}$ the continuous functions are dense in $L^2(K)$. Now, the polynomials are dense in the continuous functions (w.r.t. to the sup-norm), so particular they are dense w.r.t. to the $L^2$ norm.</p> Answer 3: <p>Whenever you apply a spectral projection $E(S)\ne I$ to $A$, you end up with $0$ in the point spectrum of $E(S)A=AE(S)$ because $\{AE(S)\}E(\sigma\setminus S)=0$ and $E(\sigma\setminus S) \ne 0$. So that special case always requires special attention.</p> <p>If $(a,b)\subseteq\sigma(A)$, then $E(a,b) \ne 0$; otherwise $\mu \in (a,b)$ would lead to a bounded operator, $$ R(\mu)=\int_{\sigma}\frac{1}{\lambda-\mu}dE(\mu), $$ which would have to be the resolvent $R(\mu)=(A-\mu I)^{-1}$. And that would force $(a,b)\in\rho(A)$, contrary to assumption.</p> <p>Suppose $[0,1]\subseteq\sigma(A)$. Then $E(a,b) \ne 0$ for $(a,b)\subseteq[0,1]$. I'll first consider your first question where you ask about the spectrum of $AE[1/4,1/2)$. Automatically $0\in\sigma(AE[1/4,1/2))$ because $\{AE[1/4,1/2)\}E(1/2,1)=0$ and $E(1/2,1)\ne 0$. For $\mu\ne 0$ and $\mu\notin [1/4,1/2]$, $$ AE[1/4,1/2)-\mu I=(A-\mu I)E[1/4,1/2)-\mu E(\sigma\setminus[1/4,1/2)) $$ has a bounded inverse given by $$ (AE[1/4,1/2)-\mu I)^{-1}=\int_{[1/4,1/2)}\frac{1}{\lambda-\mu}dE(\lambda)-\frac{1}{\mu}E(\sigma\setminus[1/4,1/2)). $$ Therefore, $\sigma(AE[1/4,1/2))\subseteq [1/4,1/2]\cup\{0\}$. 
Conversely $0\in\sigma(AE[1/4,1/2))$ was noted above, and, for any $\mu\in(1/4,1/2)$, the projections $E(\mu-\delta,\mu+\delta)\ne 0$ for all $\delta &gt; 0$, which gives the existence of a non-zero vector $x_{\delta}$ such that $E(\mu-\delta,\mu+\delta)x_{\delta}=x_{\delta}$ and, hence, \begin{align} \|AE[1/4,1/2)x_{\delta}-\mu x_{\delta}\| &amp; = \|(A-\mu I)E(\mu-\delta,\mu+\delta)x_{\delta}\| \\ &amp; \le \delta \|E(\mu-\delta,\mu+\delta)x_{\delta}\| \\ &amp; = \delta\|x_{\delta}\|. \end{align} So $AE[1/4,1/2)-\mu I$ cannot be continuously invertible, which proves $$ \{0\}\cup (1/4,1/2) \subseteq \sigma(AE[1/4,1/2)) $$ Because the spectrum is closed, $$ \{0\} \cup [1/4,1/2]\subseteq \sigma(AE[1/4,1/2)). $$ The opposite inclusion was previously shown. So $$ \sigma(AE[1/4,1/2))=\{0\}\cup[1/4,1/2]. $$ The operator $AE[1/4,1/2]$ is selfadjoint. So its norm is its spectral radius, which gives $\|AE[1/4,1/2)\|=1/2$.</p> <p>I'll let you consider the other cases. Note for example that $S=[1/3,1/2]\cap\mathbb{Q}$ could be such that $E(S)=0$, or it could give $E(S)=E[1/3,1/2]$, or $E(S)=E(T)$ could hold for a lot of closed subsets $T$ of $[1/3,1/2]$ because the spectrum is closed, and every subset of $[1/3,1/2]\cap\mathbb{Q}$ could consist of eigenvalues.</p> Answer 4: <p>In the second case where $\Omega=[\tfrac{1}{4}, \tfrac{1}{3}) \cup ([\tfrac{1}{3}, \tfrac{1}{2}) \cap \mathbb{Q})$ there is no definite answer <em>even if we assume the existence of a cyclic vector</em>.</p> <p>First, consider $A$ to be multiplication by $f(x)=x$ on $L^2[0,1]$ with the usual Lebesgue measure. A cyclic vector is given by the constant function 1 since polynomials are dense in $L^2[0,1]$. Then the spectral measure $E_\Omega$ is given explicitly by the functional calculus as the operator $\chi_\Omega (A) \phi (x) = \chi_\Omega(x) \phi$.</p> <p>In particular, the operator $E_\Omega A$ is given by multiplication with $g(x)=x \chi_\Omega(x)$. 
Computing its norm amounts to computing its spectrum which amounts to computing its essential range with respect to the Lebesgue measure. Clearly, the essential range of $g$ is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}]$, so $|| E_\Omega A ||= \tfrac{1}{3}$.</p> <p>Second, consider $L^2[0,1]$ with the measure given by $\mu = \lambda + \delta_{\tfrac{1}{2}}$, where $\lambda$ denotes the Lebesgue measure. I.e. we give the point $\tfrac{1}{2}$ mass 1. Then, by the same argument as above we have to find the essential range of $g(x)=x \chi_\Omega(x)$ w.r.t. $\mu$. Here, we have that the essential range is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}] \cup \{ \tfrac{1}{2} \}$, so $|| E_\Omega A ||= \tfrac{1}{2}$.</p> <p>Note: In the second case, we also have a cyclic vector, namely the constant function 1. This is consequence of the following: For finite, regular Borel measures on compact subsets $K$ of $\mathbb{R}$ the continuous functions are dense in $L^2(K)$. Now, the polynomials are dense in the continuous functions (w.r.t. to the sup-norm), so particular they are dense w.r.t. to the $L^2$ norm.</p> Answer 5: <p>Whenever you apply a spectral projection $E(S)\ne I$ to $A$, you end up with $0$ in the point spectrum of $E(S)A=AE(S)$ because $\{AE(S)\}E(\sigma\setminus S)=0$ and $E(\sigma\setminus S) \ne 0$. So that special case always requires special attention.</p> <p>If $(a,b)\subseteq\sigma(A)$, then $E(a,b) \ne 0$; otherwise $\mu \in (a,b)$ would lead to a bounded operator, $$ R(\mu)=\int_{\sigma}\frac{1}{\lambda-\mu}dE(\mu), $$ which would have to be the resolvent $R(\mu)=(A-\mu I)^{-1}$. And that would force $(a,b)\in\rho(A)$, contrary to assumption.</p> <p>Suppose $[0,1]\subseteq\sigma(A)$. Then $E(a,b) \ne 0$ for $(a,b)\subseteq[0,1]$. I'll first consider your first question where you ask about the spectrum of $AE[1/4,1/2)$. Automatically $0\in\sigma(AE[1/4,1/2))$ because $\{AE[1/4,1/2)\}E(1/2,1)=0$ and $E(1/2,1)\ne 0$. 
For $\mu\ne 0$ and $\mu\notin [1/4,1/2]$, $$ AE[1/4,1/2)-\mu I=(A-\mu I)E[1/4,1/2)-\mu E(\sigma\setminus[1/4,1/2)) $$ has a bounded inverse given by $$ (AE[1/4,1/2)-\mu I)^{-1}=\int_{[1/4,1/2)}\frac{1}{\lambda-\mu}dE(\lambda)-\frac{1}{\mu}E(\sigma\setminus[1/4,1/2)). $$ Therefore, $\sigma(AE[1/4,1/2))\subseteq [1/4,1/2]\cup\{0\}$. Conversely $0\in\sigma(AE[1/4,1/2))$ was noted above, and, for any $\mu\in(1/4,1/2)$, the projections $E(\mu-\delta,\mu+\delta)\ne 0$ for all $\delta &gt; 0$, which gives the existence of a non-zero vector $x_{\delta}$ such that $E(\mu-\delta,\mu+\delta)x_{\delta}=x_{\delta}$ and, hence, \begin{align} \|AE[1/4,1/2)x_{\delta}-\mu x_{\delta}\| &amp; = \|(A-\mu I)E(\mu-\delta,\mu+\delta)x_{\delta}\| \\ &amp; \le \delta \|E(\mu-\delta,\mu+\delta)x_{\delta}\| \\ &amp; = \delta\|x_{\delta}\|. \end{align} So $AE[1/4,1/2)-\mu I$ cannot be continuously invertible, which proves $$ \{0\}\cup (1/4,1/2) \subseteq \sigma(AE[1/4,1/2)) $$ Because the spectrum is closed, $$ \{0\} \cup [1/4,1/2]\subseteq \sigma(AE[1/4,1/2)). $$ The opposite inclusion was previously shown. So $$ \sigma(AE[1/4,1/2))=\{0\}\cup[1/4,1/2]. $$ The operator $AE[1/4,1/2]$ is selfadjoint. So its norm is its spectral radius, which gives $\|AE[1/4,1/2)\|=1/2$.</p> <p>I'll let you consider the other cases. Note for example that $S=[1/3,1/2]\cap\mathbb{Q}$ could be such that $E(S)=0$, or it could give $E(S)=E[1/3,1/2]$, or $E(S)=E(T)$ could hold for a lot of closed subsets $T$ of $[1/3,1/2]$ because the spectrum is closed, and every subset of $[1/3,1/2]\cap\mathbb{Q}$ could consist of eigenvalues.</p> Answer 6: <p>In the second case where $\Omega=[\tfrac{1}{4}, \tfrac{1}{3}) \cup ([\tfrac{1}{3}, \tfrac{1}{2}) \cap \mathbb{Q})$ there is no definite answer <em>even if we assume the existence of a cyclic vector</em>.</p> <p>First, consider $A$ to be multiplication by $f(x)=x$ on $L^2[0,1]$ with the usual Lebesgue measure. 
A cyclic vector is given by the constant function 1 since polynomials are dense in $L^2[0,1]$. Then the spectral measure $E_\Omega$ is given explicitly by the functional calculus as the operator $\chi_\Omega (A) \phi (x) = \chi_\Omega(x) \phi$.</p> <p>In particular, the operator $E_\Omega A$ is given by multiplication with $g(x)=x \chi_\Omega(x)$. Computing its norm amounts to computing its spectrum which amounts to computing its essential range with respect to the Lebesgue measure. Clearly, the essential range of $g$ is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}]$, so $|| E_\Omega A ||= \tfrac{1}{3}$.</p> <p>Second, consider $L^2[0,1]$ with the measure given by $\mu = \lambda + \delta_{\tfrac{1}{2}}$, where $\lambda$ denotes the Lebesgue measure. I.e. we give the point $\tfrac{1}{2}$ mass 1. Then, by the same argument as above we have to find the essential range of $g(x)=x \chi_\Omega(x)$ w.r.t. $\mu$. Here, we have that the essential range is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}] \cup \{ \tfrac{1}{2} \}$, so $|| E_\Omega A ||= \tfrac{1}{2}$.</p> <p>Note: In the second case, we also have a cyclic vector, namely the constant function 1. This is consequence of the following: For finite, regular Borel measures on compact subsets $K$ of $\mathbb{R}$ the continuous functions are dense in $L^2(K)$. Now, the polynomials are dense in the continuous functions (w.r.t. to the sup-norm), so particular they are dense w.r.t. to the $L^2$ norm.</p> Answer 7: <p>Whenever you apply a spectral projection $E(S)\ne I$ to $A$, you end up with $0$ in the point spectrum of $E(S)A=AE(S)$ because $\{AE(S)\}E(\sigma\setminus S)=0$ and $E(\sigma\setminus S) \ne 0$. So that special case always requires special attention.</p> <p>If $(a,b)\subseteq\sigma(A)$, then $E(a,b) \ne 0$; otherwise $\mu \in (a,b)$ would lead to a bounded operator, $$ R(\mu)=\int_{\sigma}\frac{1}{\lambda-\mu}dE(\mu), $$ which would have to be the resolvent $R(\mu)=(A-\mu I)^{-1}$. 
And that would force $(a,b)\in\rho(A)$, contrary to assumption.</p> <p>Suppose $[0,1]\subseteq\sigma(A)$. Then $E(a,b) \ne 0$ for $(a,b)\subseteq[0,1]$. I'll first consider your first question where you ask about the spectrum of $AE[1/4,1/2)$. Automatically $0\in\sigma(AE[1/4,1/2))$ because $\{AE[1/4,1/2)\}E(1/2,1)=0$ and $E(1/2,1)\ne 0$. For $\mu\ne 0$ and $\mu\notin [1/4,1/2]$, $$ AE[1/4,1/2)-\mu I=(A-\mu I)E[1/4,1/2)-\mu E(\sigma\setminus[1/4,1/2)) $$ has a bounded inverse given by $$ (AE[1/4,1/2)-\mu I)^{-1}=\int_{[1/4,1/2)}\frac{1}{\lambda-\mu}dE(\lambda)-\frac{1}{\mu}E(\sigma\setminus[1/4,1/2)). $$ Therefore, $\sigma(AE[1/4,1/2))\subseteq [1/4,1/2]\cup\{0\}$. Conversely $0\in\sigma(AE[1/4,1/2))$ was noted above, and, for any $\mu\in(1/4,1/2)$, the projections $E(\mu-\delta,\mu+\delta)\ne 0$ for all $\delta &gt; 0$, which gives the existence of a non-zero vector $x_{\delta}$ such that $E(\mu-\delta,\mu+\delta)x_{\delta}=x_{\delta}$ and, hence, \begin{align} \|AE[1/4,1/2)x_{\delta}-\mu x_{\delta}\| &amp; = \|(A-\mu I)E(\mu-\delta,\mu+\delta)x_{\delta}\| \\ &amp; \le \delta \|E(\mu-\delta,\mu+\delta)x_{\delta}\| \\ &amp; = \delta\|x_{\delta}\|. \end{align} So $AE[1/4,1/2)-\mu I$ cannot be continuously invertible, which proves $$ \{0\}\cup (1/4,1/2) \subseteq \sigma(AE[1/4,1/2)) $$ Because the spectrum is closed, $$ \{0\} \cup [1/4,1/2]\subseteq \sigma(AE[1/4,1/2)). $$ The opposite inclusion was previously shown. So $$ \sigma(AE[1/4,1/2))=\{0\}\cup[1/4,1/2]. $$ The operator $AE[1/4,1/2]$ is selfadjoint. So its norm is its spectral radius, which gives $\|AE[1/4,1/2)\|=1/2$.</p> <p>I'll let you consider the other cases. 
Note for example that $S=[1/3,1/2]\cap\mathbb{Q}$ could be such that $E(S)=0$, or it could give $E(S)=E[1/3,1/2]$, or $E(S)=E(T)$ could hold for a lot of closed subsets $T$ of $[1/3,1/2]$ because the spectrum is closed, and every subset of $[1/3,1/2]\cap\mathbb{Q}$ could consist of eigenvalues.</p> Answer 8: <p>In the second case where $\Omega=[\tfrac{1}{4}, \tfrac{1}{3}) \cup ([\tfrac{1}{3}, \tfrac{1}{2}) \cap \mathbb{Q})$ there is no definite answer <em>even if we assume the existence of a cyclic vector</em>.</p> <p>First, consider $A$ to be multiplication by $f(x)=x$ on $L^2[0,1]$ with the usual Lebesgue measure. A cyclic vector is given by the constant function 1 since polynomials are dense in $L^2[0,1]$. Then the spectral measure $E_\Omega$ is given explicitly by the functional calculus as the operator $\chi_\Omega (A) \phi (x) = \chi_\Omega(x) \phi$.</p> <p>In particular, the operator $E_\Omega A$ is given by multiplication with $g(x)=x \chi_\Omega(x)$. Computing its norm amounts to computing its spectrum which amounts to computing its essential range with respect to the Lebesgue measure. Clearly, the essential range of $g$ is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}]$, so $|| E_\Omega A ||= \tfrac{1}{3}$.</p> <p>Second, consider $L^2[0,1]$ with the measure given by $\mu = \lambda + \delta_{\tfrac{1}{2}}$, where $\lambda$ denotes the Lebesgue measure. I.e. we give the point $\tfrac{1}{2}$ mass 1. Then, by the same argument as above we have to find the essential range of $g(x)=x \chi_\Omega(x)$ w.r.t. $\mu$. Here, we have that the essential range is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}] \cup \{ \tfrac{1}{2} \}$, so $|| E_\Omega A ||= \tfrac{1}{2}$.</p> <p>Note: In the second case, we also have a cyclic vector, namely the constant function 1. This is consequence of the following: For finite, regular Borel measures on compact subsets $K$ of $\mathbb{R}$ the continuous functions are dense in $L^2(K)$. 
Now, the polynomials are dense in the continuous functions (w.r.t. to the sup-norm), so particular they are dense w.r.t. to the $L^2$ norm.</p> Answer 9: <p>Whenever you apply a spectral projection $E(S)\ne I$ to $A$, you end up with $0$ in the point spectrum of $E(S)A=AE(S)$ because $\{AE(S)\}E(\sigma\setminus S)=0$ and $E(\sigma\setminus S) \ne 0$. So that special case always requires special attention.</p> <p>If $(a,b)\subseteq\sigma(A)$, then $E(a,b) \ne 0$; otherwise $\mu \in (a,b)$ would lead to a bounded operator, $$ R(\mu)=\int_{\sigma}\frac{1}{\lambda-\mu}dE(\mu), $$ which would have to be the resolvent $R(\mu)=(A-\mu I)^{-1}$. And that would force $(a,b)\in\rho(A)$, contrary to assumption.</p> <p>Suppose $[0,1]\subseteq\sigma(A)$. Then $E(a,b) \ne 0$ for $(a,b)\subseteq[0,1]$. I'll first consider your first question where you ask about the spectrum of $AE[1/4,1/2)$. Automatically $0\in\sigma(AE[1/4,1/2))$ because $\{AE[1/4,1/2)\}E(1/2,1)=0$ and $E(1/2,1)\ne 0$. For $\mu\ne 0$ and $\mu\notin [1/4,1/2]$, $$ AE[1/4,1/2)-\mu I=(A-\mu I)E[1/4,1/2)-\mu E(\sigma\setminus[1/4,1/2)) $$ has a bounded inverse given by $$ (AE[1/4,1/2)-\mu I)^{-1}=\int_{[1/4,1/2)}\frac{1}{\lambda-\mu}dE(\lambda)-\frac{1}{\mu}E(\sigma\setminus[1/4,1/2)). $$ Therefore, $\sigma(AE[1/4,1/2))\subseteq [1/4,1/2]\cup\{0\}$. Conversely $0\in\sigma(AE[1/4,1/2))$ was noted above, and, for any $\mu\in(1/4,1/2)$, the projections $E(\mu-\delta,\mu+\delta)\ne 0$ for all $\delta &gt; 0$, which gives the existence of a non-zero vector $x_{\delta}$ such that $E(\mu-\delta,\mu+\delta)x_{\delta}=x_{\delta}$ and, hence, \begin{align} \|AE[1/4,1/2)x_{\delta}-\mu x_{\delta}\| &amp; = \|(A-\mu I)E(\mu-\delta,\mu+\delta)x_{\delta}\| \\ &amp; \le \delta \|E(\mu-\delta,\mu+\delta)x_{\delta}\| \\ &amp; = \delta\|x_{\delta}\|. 
\end{align} So $AE[1/4,1/2)-\mu I$ cannot be continuously invertible, which proves $$ \{0\}\cup (1/4,1/2) \subseteq \sigma(AE[1/4,1/2)) $$ Because the spectrum is closed, $$ \{0\} \cup [1/4,1/2]\subseteq \sigma(AE[1/4,1/2)). $$ The opposite inclusion was previously shown. So $$ \sigma(AE[1/4,1/2))=\{0\}\cup[1/4,1/2]. $$ The operator $AE[1/4,1/2]$ is selfadjoint. So its norm is its spectral radius, which gives $\|AE[1/4,1/2)\|=1/2$.</p> <p>I'll let you consider the other cases. Note for example that $S=[1/3,1/2]\cap\mathbb{Q}$ could be such that $E(S)=0$, or it could give $E(S)=E[1/3,1/2]$, or $E(S)=E(T)$ could hold for a lot of closed subsets $T$ of $[1/3,1/2]$ because the spectrum is closed, and every subset of $[1/3,1/2]\cap\mathbb{Q}$ could consist of eigenvalues.</p> Answer 10: <p>In the second case where $\Omega=[\tfrac{1}{4}, \tfrac{1}{3}) \cup ([\tfrac{1}{3}, \tfrac{1}{2}) \cap \mathbb{Q})$ there is no definite answer <em>even if we assume the existence of a cyclic vector</em>.</p> <p>First, consider $A$ to be multiplication by $f(x)=x$ on $L^2[0,1]$ with the usual Lebesgue measure. A cyclic vector is given by the constant function 1 since polynomials are dense in $L^2[0,1]$. Then the spectral measure $E_\Omega$ is given explicitly by the functional calculus as the operator $\chi_\Omega (A) \phi (x) = \chi_\Omega(x) \phi$.</p> <p>In particular, the operator $E_\Omega A$ is given by multiplication with $g(x)=x \chi_\Omega(x)$. Computing its norm amounts to computing its spectrum which amounts to computing its essential range with respect to the Lebesgue measure. Clearly, the essential range of $g$ is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}]$, so $|| E_\Omega A ||= \tfrac{1}{3}$.</p> <p>Second, consider $L^2[0,1]$ with the measure given by $\mu = \lambda + \delta_{\tfrac{1}{2}}$, where $\lambda$ denotes the Lebesgue measure. I.e. we give the point $\tfrac{1}{2}$ mass 1. 
Then, by the same argument as above we have to find the essential range of $g(x)=x \chi_\Omega(x)$ w.r.t. $\mu$. Here, we have that the essential range is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}] \cup \{ \tfrac{1}{2} \}$, so $|| E_\Omega A ||= \tfrac{1}{2}$.</p> <p>Note: In the second case, we also have a cyclic vector, namely the constant function 1. This is a consequence of the following: for finite, regular Borel measures on compact subsets $K$ of $\mathbb{R}$, the continuous functions are dense in $L^2(K)$. Now, the polynomials are dense in the continuous functions (w.r.t. the sup-norm), so in particular they are dense w.r.t. the $L^2$ norm.</p>
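The Lebesgue-measure case of the answer above admits a quick numerical sanity check (an illustration, not part of the original answer): sampling $g(x)=x\chi_\Omega(x)$ on a fine grid recovers the essential supremum $1/3$. The point mass at $1/2$ in the second construction is invisible to such a grid, which is exactly why the two models give different norms.

```python
import numpy as np

def in_omega(x):
    # Omega = [1/4, 1/3) union ([1/3, 1/2) ∩ Q); on a float grid only the
    # Lebesgue part matters, since [1/3, 1/2) ∩ Q has measure zero.
    return (x >= 0.25) & (x < 1 / 3)

x = np.linspace(0.0, 1.0, 1_000_001)
g = x * in_omega(x)   # multiplication symbol of E_Omega A in the Lebesgue model
print(g.max())        # ≈ 1/3, the essential sup of g, i.e. ||E_Omega A||
```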
How can we know which function is greater than the other without drawing them?
2,108,203
6
https://math.stackexchange.com/questions/2108203/how-can-we-know-which-function-is-greater-than-the-other-without-drawing-them
<p><a href="https://i.stack.imgur.com/zeumD.gif" rel="noreferrer">Two functions</a></p> <p>In other words, what is a conventional way to know whether the function $f(x)$ is greater than $g(x)$, taking into account that they sometimes exchange positions after intersecting?</p> <p>Thus a more specific question could be: how do we know, on each range, which function is greater than the other?</p>
[ { "answer_id": 2108216, "score": 1, "text": "<p>Compute the roots of $h(x)=f(x)-g(x)$.</p>\n\n<p>Then you know which intervals are interesting.</p>\n\n<p>Those intervals are finite in number for most exercises you encounter, or have a very easy to recognize pattern (like $\\sin(x)-\\cos(x)$).</p>\n\n<p>Then check with values inside those intervals whether $f(x)&gt;g(x)$ or vice versa.</p>\n\n<p>Can you explain why this works?</p>\n" }, { "answer_id": 2108208, "score": 0, "text": "<p>There's only one way in general: plug in a number and see what you get. If $f(x) &gt; g(x)$ for that value of $x$, then $f$ will be larger than $g$ until they intersect again. I assume, though, that you're looking for something simpler, something you could use when $f$ and $g$ are hard to compute. The answer is that there isn't any technique that will work all the time - there are just a few tricks you can use in specific cases.</p>\n\n<p>For example, in your image you're looking at the functions $f(x) = x^2$ and $g(x) = \\sqrt{x}$. When $x$ is very small (very close to zero), $x^2$ is <em>very</em> small (a very small piece of a very small number) while $\\sqrt{x}$ is \"smallish\" (only its square is \"very small\"). So $\\sqrt{x} &gt; x^2$ when $x$ is a little bigger than zero, and it will remain that way until the curves intersect again (at $x = 1$).</p>\n" }, { "answer_id": 2108214, "score": 0, "text": "<p>Over the same interval $[a,b]$, a function $f(x)$ is higher (or lower) than a function $g(x)$ if:</p>\n\n<p>$f(a)&gt;g(a)$ (or $&lt;$) and $f(x)-g(x)=0$ has no root in $(a,b]$.</p>\n\n<p>Your $f(x)=x^2$ and $g(x)=\\sqrt{x}$ can be compared over $[1,\\infty)$ as $f(1)=g(1)$ at the initial point, but $x^2-\\sqrt x=0$ has at most $1$ positive and no negative real root in $\\mathbb R$ (<em>Descartes' rule</em>), hence no roots in $(1,\\infty)$.
So $f(x)&gt;g(x)$ in $(1,\\infty)$.</p>\n" }, { "answer_id": 2108334, "score": 0, "text": "<p>Observe that the domain of $\\sqrt{x}$ is $x&gt;0$ and $x=0$ and that there is no restriction on $x^2$. So the restriction on both of these domains is the common domain i.e. that of $\\sqrt{x}$.</p>\n\n<p>Then finding where one function is greater than the other is a matter of solving the inequality $x^2&gt;\\sqrt{x}, x&gt;0$ which works out to $x(x^3 - 1)&gt;0$ which can be graphed and solved.</p>\n" } ]
The answer is that there isn't any technique that will work all the time - there's just a few tricks you can use in specific cases.</p>\n\n<p>For example, in your image you're looking at the functions $f(x) = x^2$ and $g(x) = \\sqrt{x}$. When $x$ is very small (very close to zero), $x^2$ is <em>very</em> small (a very small piece of a very small number) while $\\sqrt{x}$ is \"smallish\" (only its square is \"very small\"). So $\\sqrt{x} &gt; x^2$ when $x$ is a little bigger than zero, and it will remain that way until the curves intersect again (at $x = 1$).</p>\n" }, { "answer_id": 2108214, "score": 0, "text": "<p>Over the same interval $[a,b]$, a function $f(x)$ is higher (or lower) than function $g(x)$ if:</p>\n\n<p>$f(a)&gt;g(a) (or &lt;)$ and $f(x)-g(x)=0$ has no root in $(a,b]$.</p>\n\n<p>Your $f(x)=x^2$ and $g(x)=\\sqrt{x}$ can be compared over $[1,\\infty)$ as $f(1)=g(1)$ at the initial point but $x^2-\\sqrt x=0$ has at most $1$ positive and no negative real root in $\\mathbb R$ (<em>Descarte's rule</em>) hence no roots in $(1,\\infty)$. So $f(x)&gt;g(x)$ in $(1,\\infty)$.</p>\n" }, { "answer_id": 2108334, "score": 0, "text": "<p>Observe that the domain of $\\sqrt{x}$ is $x&gt;0$ and $x=0$ and that there is no restriction on $x^2$. So the restriction on both of these domains is the common domain i.e. 
that of $\\sqrt{x}$.</p>\n\n<p>Then finding where one function is greater than the other is a matter of solving the inequality $x^2&gt;\\sqrt{x}, x&gt;0$ which works out to $x(x^3 - 1)&gt;0$ which can be graphed and solved.</p>\n" }, { "answer_id": 2108216, "score": 1, "text": "<p>Compute the roots of $h(x)=f(x)-g(x)$.</p>\n\n<p>Then you know what intervalls are interesting.</p>\n\n<p>Those intervalls are a finite number for most excercises you encounter, or have a very easy to recognize pattern (like sin(x)-cos(x) ).</p>\n\n<p>Then check with values inside those intervalls, wether f(x)>g(x) or vice versa.</p>\n\n<p>Can you explain why this works?</p>\n" }, { "answer_id": 2108208, "score": 0, "text": "<p>There's only one way in general: plug in a number and see what you get. If $f(x) &gt; g(x)$ for that value of $x$, then $f$ will be larger than $g$ until they intersect again. I assume, though, that you're looking for something simpler, something you could use when $f$ and $g$ are hard to compute. The answer is that there isn't any technique that will work all the time - there's just a few tricks you can use in specific cases.</p>\n\n<p>For example, in your image you're looking at the functions $f(x) = x^2$ and $g(x) = \\sqrt{x}$. When $x$ is very small (very close to zero), $x^2$ is <em>very</em> small (a very small piece of a very small number) while $\\sqrt{x}$ is \"smallish\" (only its square is \"very small\"). 
So $\\sqrt{x} &gt; x^2$ when $x$ is a little bigger than zero, and it will remain that way until the curves intersect again (at $x = 1$).</p>\n" }, { "answer_id": 2108214, "score": 0, "text": "<p>Over the same interval $[a,b]$, a function $f(x)$ is higher (or lower) than function $g(x)$ if:</p>\n\n<p>$f(a)&gt;g(a) (or &lt;)$ and $f(x)-g(x)=0$ has no root in $(a,b]$.</p>\n\n<p>Your $f(x)=x^2$ and $g(x)=\\sqrt{x}$ can be compared over $[1,\\infty)$ as $f(1)=g(1)$ at the initial point but $x^2-\\sqrt x=0$ has at most $1$ positive and no negative real root in $\\mathbb R$ (<em>Descarte's rule</em>) hence no roots in $(1,\\infty)$. So $f(x)&gt;g(x)$ in $(1,\\infty)$.</p>\n" }, { "answer_id": 2108334, "score": 0, "text": "<p>Observe that the domain of $\\sqrt{x}$ is $x&gt;0$ and $x=0$ and that there is no restriction on $x^2$. So the restriction on both of these domains is the common domain i.e. that of $\\sqrt{x}$.</p>\n\n<p>Then finding where one function is greater than the other is a matter of solving the inequality $x^2&gt;\\sqrt{x}, x&gt;0$ which works out to $x(x^3 - 1)&gt;0$ which can be graphed and solved.</p>\n" }, { "answer_id": 2108216, "score": 1, "text": "<p>Compute the roots of $h(x)=f(x)-g(x)$.</p>\n\n<p>Then you know what intervalls are interesting.</p>\n\n<p>Those intervalls are a finite number for most excercises you encounter, or have a very easy to recognize pattern (like sin(x)-cos(x) ).</p>\n\n<p>Then check with values inside those intervalls, wether f(x)>g(x) or vice versa.</p>\n\n<p>Can you explain why this works?</p>\n" }, { "answer_id": 2108208, "score": 0, "text": "<p>There's only one way in general: plug in a number and see what you get. If $f(x) &gt; g(x)$ for that value of $x$, then $f$ will be larger than $g$ until they intersect again. I assume, though, that you're looking for something simpler, something you could use when $f$ and $g$ are hard to compute. 
The answer is that there isn't any technique that will work all the time - there's just a few tricks you can use in specific cases.</p>\n\n<p>For example, in your image you're looking at the functions $f(x) = x^2$ and $g(x) = \\sqrt{x}$. When $x$ is very small (very close to zero), $x^2$ is <em>very</em> small (a very small piece of a very small number) while $\\sqrt{x}$ is \"smallish\" (only its square is \"very small\"). So $\\sqrt{x} &gt; x^2$ when $x$ is a little bigger than zero, and it will remain that way until the curves intersect again (at $x = 1$).</p>\n" }, { "answer_id": 2108214, "score": 0, "text": "<p>Over the same interval $[a,b]$, a function $f(x)$ is higher (or lower) than function $g(x)$ if:</p>\n\n<p>$f(a)&gt;g(a) (or &lt;)$ and $f(x)-g(x)=0$ has no root in $(a,b]$.</p>\n\n<p>Your $f(x)=x^2$ and $g(x)=\\sqrt{x}$ can be compared over $[1,\\infty)$ as $f(1)=g(1)$ at the initial point but $x^2-\\sqrt x=0$ has at most $1$ positive and no negative real root in $\\mathbb R$ (<em>Descarte's rule</em>) hence no roots in $(1,\\infty)$. So $f(x)&gt;g(x)$ in $(1,\\infty)$.</p>\n" }, { "answer_id": 2108334, "score": 0, "text": "<p>Observe that the domain of $\\sqrt{x}$ is $x&gt;0$ and $x=0$ and that there is no restriction on $x^2$. So the restriction on both of these domains is the common domain i.e. 
that of $\\sqrt{x}$.</p>\n\n<p>Then finding where one function is greater than the other is a matter of solving the inequality $x^2&gt;\\sqrt{x}, x&gt;0$ which works out to $x(x^3 - 1)&gt;0$ which can be graphed and solved.</p>\n" }, { "answer_id": 2108216, "score": 1, "text": "<p>Compute the roots of $h(x)=f(x)-g(x)$.</p>\n\n<p>Then you know what intervalls are interesting.</p>\n\n<p>Those intervalls are a finite number for most excercises you encounter, or have a very easy to recognize pattern (like sin(x)-cos(x) ).</p>\n\n<p>Then check with values inside those intervalls, wether f(x)>g(x) or vice versa.</p>\n\n<p>Can you explain why this works?</p>\n" }, { "answer_id": 2108208, "score": 0, "text": "<p>There's only one way in general: plug in a number and see what you get. If $f(x) &gt; g(x)$ for that value of $x$, then $f$ will be larger than $g$ until they intersect again. I assume, though, that you're looking for something simpler, something you could use when $f$ and $g$ are hard to compute. The answer is that there isn't any technique that will work all the time - there's just a few tricks you can use in specific cases.</p>\n\n<p>For example, in your image you're looking at the functions $f(x) = x^2$ and $g(x) = \\sqrt{x}$. When $x$ is very small (very close to zero), $x^2$ is <em>very</em> small (a very small piece of a very small number) while $\\sqrt{x}$ is \"smallish\" (only its square is \"very small\"). 
So $\\sqrt{x} &gt; x^2$ when $x$ is a little bigger than zero, and it will remain that way until the curves intersect again (at $x = 1$).</p>\n" }, { "answer_id": 2108214, "score": 0, "text": "<p>Over the same interval $[a,b]$, a function $f(x)$ is higher (or lower) than function $g(x)$ if:</p>\n\n<p>$f(a)&gt;g(a) (or &lt;)$ and $f(x)-g(x)=0$ has no root in $(a,b]$.</p>\n\n<p>Your $f(x)=x^2$ and $g(x)=\\sqrt{x}$ can be compared over $[1,\\infty)$ as $f(1)=g(1)$ at the initial point but $x^2-\\sqrt x=0$ has at most $1$ positive and no negative real root in $\\mathbb R$ (<em>Descarte's rule</em>) hence no roots in $(1,\\infty)$. So $f(x)&gt;g(x)$ in $(1,\\infty)$.</p>\n" }, { "answer_id": 2108334, "score": 0, "text": "<p>Observe that the domain of $\\sqrt{x}$ is $x&gt;0$ and $x=0$ and that there is no restriction on $x^2$. So the restriction on both of these domains is the common domain i.e. that of $\\sqrt{x}$.</p>\n\n<p>Then finding where one function is greater than the other is a matter of solving the inequality $x^2&gt;\\sqrt{x}, x&gt;0$ which works out to $x(x^3 - 1)&gt;0$ which can be graphed and solved.</p>\n" } ]
Question: How can we know which function is greater than the other without drawing them? Answer 1: <p>Compute the roots of $h(x)=f(x)-g(x)$.</p> <p>Then you know what intervalls are interesting.</p> <p>Those intervalls are a finite number for most excercises you encounter, or have a very easy to recognize pattern (like sin(x)-cos(x) ).</p> <p>Then check with values inside those intervalls, wether f(x)>g(x) or vice versa.</p> <p>Can you explain why this works?</p> Answer 2: <p>There's only one way in general: plug in a number and see what you get. If $f(x) &gt; g(x)$ for that value of $x$, then $f$ will be larger than $g$ until they intersect again. I assume, though, that you're looking for something simpler, something you could use when $f$ and $g$ are hard to compute. The answer is that there isn't any technique that will work all the time - there's just a few tricks you can use in specific cases.</p> <p>For example, in your image you're looking at the functions $f(x) = x^2$ and $g(x) = \sqrt{x}$. When $x$ is very small (very close to zero), $x^2$ is <em>very</em> small (a very small piece of a very small number) while $\sqrt{x}$ is "smallish" (only its square is "very small"). So $\sqrt{x} &gt; x^2$ when $x$ is a little bigger than zero, and it will remain that way until the curves intersect again (at $x = 1$).</p> Answer 3: <p>Over the same interval $[a,b]$, a function $f(x)$ is higher (or lower) than function $g(x)$ if:</p> <p>$f(a)&gt;g(a) (or &lt;)$ and $f(x)-g(x)=0$ has no root in $(a,b]$.</p> <p>Your $f(x)=x^2$ and $g(x)=\sqrt{x}$ can be compared over $[1,\infty)$ as $f(1)=g(1)$ at the initial point but $x^2-\sqrt x=0$ has at most $1$ positive and no negative real root in $\mathbb R$ (<em>Descarte's rule</em>) hence no roots in $(1,\infty)$. So $f(x)&gt;g(x)$ in $(1,\infty)$.</p> Answer 4: <p>Observe that the domain of $\sqrt{x}$ is $x&gt;0$ and $x=0$ and that there is no restriction on $x^2$. 
So the restriction on both of these domains is the common domain i.e. that of $\sqrt{x}$.</p> <p>Then finding where one function is greater than the other is a matter of solving the inequality $x^2&gt;\sqrt{x}, x&gt;0$ which works out to $x(x^3 - 1)&gt;0$ which can be graphed and solved.</p> Answer 5: <p>Compute the roots of $h(x)=f(x)-g(x)$.</p> <p>Then you know what intervalls are interesting.</p> <p>Those intervalls are a finite number for most excercises you encounter, or have a very easy to recognize pattern (like sin(x)-cos(x) ).</p> <p>Then check with values inside those intervalls, wether f(x)>g(x) or vice versa.</p> <p>Can you explain why this works?</p> Answer 6: <p>There's only one way in general: plug in a number and see what you get. If $f(x) &gt; g(x)$ for that value of $x$, then $f$ will be larger than $g$ until they intersect again. I assume, though, that you're looking for something simpler, something you could use when $f$ and $g$ are hard to compute. The answer is that there isn't any technique that will work all the time - there's just a few tricks you can use in specific cases.</p> <p>For example, in your image you're looking at the functions $f(x) = x^2$ and $g(x) = \sqrt{x}$. When $x$ is very small (very close to zero), $x^2$ is <em>very</em> small (a very small piece of a very small number) while $\sqrt{x}$ is "smallish" (only its square is "very small"). So $\sqrt{x} &gt; x^2$ when $x$ is a little bigger than zero, and it will remain that way until the curves intersect again (at $x = 1$).</p> Answer 7: <p>Over the same interval $[a,b]$, a function $f(x)$ is higher (or lower) than function $g(x)$ if:</p> <p>$f(a)&gt;g(a) (or &lt;)$ and $f(x)-g(x)=0$ has no root in $(a,b]$.</p> <p>Your $f(x)=x^2$ and $g(x)=\sqrt{x}$ can be compared over $[1,\infty)$ as $f(1)=g(1)$ at the initial point but $x^2-\sqrt x=0$ has at most $1$ positive and no negative real root in $\mathbb R$ (<em>Descarte's rule</em>) hence no roots in $(1,\infty)$. 
So $f(x)&gt;g(x)$ in $(1,\infty)$.</p> Answer 8: <p>Observe that the domain of $\sqrt{x}$ is $x&gt;0$ and $x=0$ and that there is no restriction on $x^2$. So the restriction on both of these domains is the common domain i.e. that of $\sqrt{x}$.</p> <p>Then finding where one function is greater than the other is a matter of solving the inequality $x^2&gt;\sqrt{x}, x&gt;0$ which works out to $x(x^3 - 1)&gt;0$ which can be graphed and solved.</p> Answer 9: <p>Compute the roots of $h(x)=f(x)-g(x)$.</p> <p>Then you know what intervalls are interesting.</p> <p>Those intervalls are a finite number for most excercises you encounter, or have a very easy to recognize pattern (like sin(x)-cos(x) ).</p> <p>Then check with values inside those intervalls, wether f(x)>g(x) or vice versa.</p> <p>Can you explain why this works?</p> Answer 10: <p>There's only one way in general: plug in a number and see what you get. If $f(x) &gt; g(x)$ for that value of $x$, then $f$ will be larger than $g$ until they intersect again. I assume, though, that you're looking for something simpler, something you could use when $f$ and $g$ are hard to compute. The answer is that there isn't any technique that will work all the time - there's just a few tricks you can use in specific cases.</p> <p>For example, in your image you're looking at the functions $f(x) = x^2$ and $g(x) = \sqrt{x}$. When $x$ is very small (very close to zero), $x^2$ is <em>very</em> small (a very small piece of a very small number) while $\sqrt{x}$ is "smallish" (only its square is "very small"). 
So $\sqrt{x} &gt; x^2$ when $x$ is a little bigger than zero, and it will remain that way until the curves intersect again (at $x = 1$).</p> Answer 11: <p>Over the same interval $[a,b]$, a function $f(x)$ is higher (or lower) than function $g(x)$ if:</p> <p>$f(a)&gt;g(a) (or &lt;)$ and $f(x)-g(x)=0$ has no root in $(a,b]$.</p> <p>Your $f(x)=x^2$ and $g(x)=\sqrt{x}$ can be compared over $[1,\infty)$ as $f(1)=g(1)$ at the initial point but $x^2-\sqrt x=0$ has at most $1$ positive and no negative real root in $\mathbb R$ (<em>Descarte's rule</em>) hence no roots in $(1,\infty)$. So $f(x)&gt;g(x)$ in $(1,\infty)$.</p> Answer 12: <p>Observe that the domain of $\sqrt{x}$ is $x&gt;0$ and $x=0$ and that there is no restriction on $x^2$. So the restriction on both of these domains is the common domain i.e. that of $\sqrt{x}$.</p> <p>Then finding where one function is greater than the other is a matter of solving the inequality $x^2&gt;\sqrt{x}, x&gt;0$ which works out to $x(x^3 - 1)&gt;0$ which can be graphed and solved.</p> Answer 13: <p>Compute the roots of $h(x)=f(x)-g(x)$.</p> <p>Then you know what intervalls are interesting.</p> <p>Those intervalls are a finite number for most excercises you encounter, or have a very easy to recognize pattern (like sin(x)-cos(x) ).</p> <p>Then check with values inside those intervalls, wether f(x)>g(x) or vice versa.</p> <p>Can you explain why this works?</p> Answer 14: <p>There's only one way in general: plug in a number and see what you get. If $f(x) &gt; g(x)$ for that value of $x$, then $f$ will be larger than $g$ until they intersect again. I assume, though, that you're looking for something simpler, something you could use when $f$ and $g$ are hard to compute. The answer is that there isn't any technique that will work all the time - there's just a few tricks you can use in specific cases.</p> <p>For example, in your image you're looking at the functions $f(x) = x^2$ and $g(x) = \sqrt{x}$. 
When $x$ is very small (very close to zero), $x^2$ is <em>very</em> small (a very small piece of a very small number) while $\sqrt{x}$ is "smallish" (only its square is "very small"). So $\sqrt{x} &gt; x^2$ when $x$ is a little bigger than zero, and it will remain that way until the curves intersect again (at $x = 1$).</p> Answer 15: <p>Over the same interval $[a,b]$, a function $f(x)$ is higher (or lower) than function $g(x)$ if:</p> <p>$f(a)&gt;g(a) (or &lt;)$ and $f(x)-g(x)=0$ has no root in $(a,b]$.</p> <p>Your $f(x)=x^2$ and $g(x)=\sqrt{x}$ can be compared over $[1,\infty)$ as $f(1)=g(1)$ at the initial point but $x^2-\sqrt x=0$ has at most $1$ positive and no negative real root in $\mathbb R$ (<em>Descarte's rule</em>) hence no roots in $(1,\infty)$. So $f(x)&gt;g(x)$ in $(1,\infty)$.</p> Answer 16: <p>Observe that the domain of $\sqrt{x}$ is $x&gt;0$ and $x=0$ and that there is no restriction on $x^2$. So the restriction on both of these domains is the common domain i.e. that of $\sqrt{x}$.</p> <p>Then finding where one function is greater than the other is a matter of solving the inequality $x^2&gt;\sqrt{x}, x&gt;0$ which works out to $x(x^3 - 1)&gt;0$ which can be graphed and solved.</p> Answer 17: <p>Compute the roots of $h(x)=f(x)-g(x)$.</p> <p>Then you know what intervalls are interesting.</p> <p>Those intervalls are a finite number for most excercises you encounter, or have a very easy to recognize pattern (like sin(x)-cos(x) ).</p> <p>Then check with values inside those intervalls, wether f(x)>g(x) or vice versa.</p> <p>Can you explain why this works?</p> Answer 18: <p>There's only one way in general: plug in a number and see what you get. If $f(x) &gt; g(x)$ for that value of $x$, then $f$ will be larger than $g$ until they intersect again. I assume, though, that you're looking for something simpler, something you could use when $f$ and $g$ are hard to compute. 
The answer is that there isn't any technique that will work all the time - there's just a few tricks you can use in specific cases.</p> <p>For example, in your image you're looking at the functions $f(x) = x^2$ and $g(x) = \sqrt{x}$. When $x$ is very small (very close to zero), $x^2$ is <em>very</em> small (a very small piece of a very small number) while $\sqrt{x}$ is "smallish" (only its square is "very small"). So $\sqrt{x} &gt; x^2$ when $x$ is a little bigger than zero, and it will remain that way until the curves intersect again (at $x = 1$).</p> Answer 19: <p>Over the same interval $[a,b]$, a function $f(x)$ is higher (or lower) than function $g(x)$ if:</p> <p>$f(a)&gt;g(a) (or &lt;)$ and $f(x)-g(x)=0$ has no root in $(a,b]$.</p> <p>Your $f(x)=x^2$ and $g(x)=\sqrt{x}$ can be compared over $[1,\infty)$ as $f(1)=g(1)$ at the initial point but $x^2-\sqrt x=0$ has at most $1$ positive and no negative real root in $\mathbb R$ (<em>Descarte's rule</em>) hence no roots in $(1,\infty)$. So $f(x)&gt;g(x)$ in $(1,\infty)$.</p> Answer 20: <p>Observe that the domain of $\sqrt{x}$ is $x&gt;0$ and $x=0$ and that there is no restriction on $x^2$. So the restriction on both of these domains is the common domain i.e. that of $\sqrt{x}$.</p> <p>Then finding where one function is greater than the other is a matter of solving the inequality $x^2&gt;\sqrt{x}, x&gt;0$ which works out to $x(x^3 - 1)&gt;0$ which can be graphed and solved.</p> Answer 21: <p>Compute the roots of $h(x)=f(x)-g(x)$.</p> <p>Then you know what intervalls are interesting.</p> <p>Those intervalls are a finite number for most excercises you encounter, or have a very easy to recognize pattern (like sin(x)-cos(x) ).</p> <p>Then check with values inside those intervalls, wether f(x)>g(x) or vice versa.</p> <p>Can you explain why this works?</p> Answer 22: <p>There's only one way in general: plug in a number and see what you get. 
If $f(x) &gt; g(x)$ for that value of $x$, then $f$ will be larger than $g$ until they intersect again. I assume, though, that you're looking for something simpler, something you could use when $f$ and $g$ are hard to compute. The answer is that there isn't any technique that will work all the time - there's just a few tricks you can use in specific cases.</p> <p>For example, in your image you're looking at the functions $f(x) = x^2$ and $g(x) = \sqrt{x}$. When $x$ is very small (very close to zero), $x^2$ is <em>very</em> small (a very small piece of a very small number) while $\sqrt{x}$ is "smallish" (only its square is "very small"). So $\sqrt{x} &gt; x^2$ when $x$ is a little bigger than zero, and it will remain that way until the curves intersect again (at $x = 1$).</p> Answer 23: <p>Over the same interval $[a,b]$, a function $f(x)$ is higher (or lower) than function $g(x)$ if:</p> <p>$f(a)&gt;g(a) (or &lt;)$ and $f(x)-g(x)=0$ has no root in $(a,b]$.</p> <p>Your $f(x)=x^2$ and $g(x)=\sqrt{x}$ can be compared over $[1,\infty)$ as $f(1)=g(1)$ at the initial point but $x^2-\sqrt x=0$ has at most $1$ positive and no negative real root in $\mathbb R$ (<em>Descarte's rule</em>) hence no roots in $(1,\infty)$. So $f(x)&gt;g(x)$ in $(1,\infty)$.</p> Answer 24: <p>Observe that the domain of $\sqrt{x}$ is $x&gt;0$ and $x=0$ and that there is no restriction on $x^2$. So the restriction on both of these domains is the common domain i.e. 
that of $\sqrt{x}$.</p> <p>Then finding where one function is greater than the other is a matter of solving the inequality $x^2&gt;\sqrt{x}, x&gt;0$ which works out to $x(x^3 - 1)&gt;0$ which can be graphed and solved.</p> Answer 25: <p>Compute the roots of $h(x)=f(x)-g(x)$.</p> <p>Then you know what intervalls are interesting.</p> <p>Those intervalls are a finite number for most excercises you encounter, or have a very easy to recognize pattern (like sin(x)-cos(x) ).</p> <p>Then check with values inside those intervalls, wether f(x)>g(x) or vice versa.</p> <p>Can you explain why this works?</p> Answer 26: <p>There's only one way in general: plug in a number and see what you get. If $f(x) &gt; g(x)$ for that value of $x$, then $f$ will be larger than $g$ until they intersect again. I assume, though, that you're looking for something simpler, something you could use when $f$ and $g$ are hard to compute. The answer is that there isn't any technique that will work all the time - there's just a few tricks you can use in specific cases.</p> <p>For example, in your image you're looking at the functions $f(x) = x^2$ and $g(x) = \sqrt{x}$. When $x$ is very small (very close to zero), $x^2$ is <em>very</em> small (a very small piece of a very small number) while $\sqrt{x}$ is "smallish" (only its square is "very small"). So $\sqrt{x} &gt; x^2$ when $x$ is a little bigger than zero, and it will remain that way until the curves intersect again (at $x = 1$).</p> Answer 27: <p>Over the same interval $[a,b]$, a function $f(x)$ is higher (or lower) than function $g(x)$ if:</p> <p>$f(a)&gt;g(a) (or &lt;)$ and $f(x)-g(x)=0$ has no root in $(a,b]$.</p> <p>Your $f(x)=x^2$ and $g(x)=\sqrt{x}$ can be compared over $[1,\infty)$ as $f(1)=g(1)$ at the initial point but $x^2-\sqrt x=0$ has at most $1$ positive and no negative real root in $\mathbb R$ (<em>Descarte's rule</em>) hence no roots in $(1,\infty)$. 
So $f(x)&gt;g(x)$ in $(1,\infty)$.</p> Answer 28: <p>Observe that the domain of $\sqrt{x}$ is $x&gt;0$ and $x=0$ and that there is no restriction on $x^2$. So the restriction on both of these domains is the common domain i.e. that of $\sqrt{x}$.</p> <p>Then finding where one function is greater than the other is a matter of solving the inequality $x^2&gt;\sqrt{x}, x&gt;0$ which works out to $x(x^3 - 1)&gt;0$ which can be graphed and solved.</p> Answer 29: <p>Compute the roots of $h(x)=f(x)-g(x)$.</p> <p>Then you know what intervalls are interesting.</p> <p>Those intervalls are a finite number for most excercises you encounter, or have a very easy to recognize pattern (like sin(x)-cos(x) ).</p> <p>Then check with values inside those intervalls, wether f(x)>g(x) or vice versa.</p> <p>Can you explain why this works?</p> Answer 30: <p>There's only one way in general: plug in a number and see what you get. If $f(x) &gt; g(x)$ for that value of $x$, then $f$ will be larger than $g$ until they intersect again. I assume, though, that you're looking for something simpler, something you could use when $f$ and $g$ are hard to compute. The answer is that there isn't any technique that will work all the time - there's just a few tricks you can use in specific cases.</p> <p>For example, in your image you're looking at the functions $f(x) = x^2$ and $g(x) = \sqrt{x}$. When $x$ is very small (very close to zero), $x^2$ is <em>very</em> small (a very small piece of a very small number) while $\sqrt{x}$ is "smallish" (only its square is "very small"). 
So $\sqrt{x} &gt; x^2$ when $x$ is a little bigger than zero, and it will remain that way until the curves intersect again (at $x = 1$).</p>
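The claim above (that $\sqrt{x} &gt; x^2$ on $(0,1)$, with the curves meeting again at $x = 1$) can be checked with a short script. This is an illustrative sketch of the "find the roots of $h(x)=f(x)-g(x)$, then test a sample point in each interval" approach; the function names are mine, not from the original post.

```python
import math

def f(x):  # f(x) = x^2
    return x * x

def g(x):  # g(x) = sqrt(x)
    return math.sqrt(x)

# The curves intersect where h(x) = f(x) - g(x) = 0, i.e. at x = 0 and x = 1.
for root in (0.0, 1.0):
    assert abs(f(root) - g(root)) < 1e-12

# Test one sample point inside each interval to decide which curve is on top.
assert g(0.5) > f(0.5)   # on (0, 1): sqrt(x) > x^2
assert f(2.0) > g(2.0)   # on (1, inf): x^2 > sqrt(x)
```

Since $h$ is continuous, its sign can only change at its roots, which is why one sample point per interval suffices.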
For every triangle, prove that $\sum\limits_{cyc}\frac{a}{(a+b)^2}\geq\frac{9}{4(a+b+c)}$
2,109,187
6
https://math.stackexchange.com/questions/2109187/for-all-triangle-prove-that-sum-limits-cyc-fracaab2-geq-frac94a
<blockquote> <p>Let $a$, $b$ and $c$ be the side-lengths of a triangle. Prove that: $$\frac{a}{(a+b)^2}+\frac{b}{(b+c)^2}+\frac{c}{(c+a)^2}\geq\frac{9}{4(a+b+c)}$$</p> </blockquote> <p>The <a href="https://brilliant.org/discussions/thread/the-buffalo-way/" rel="noreferrer">Buffalo way</a> kills it, but I am looking for a nice proof of this nice inequality.</p> <p>SOS (sums of squares) gives $\sum\limits_{cyc}\frac{(a-b)^2(ab+b^2-ac)}{(a+b)^2}\geq0$ and I don't see what comes next.</p>
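As a quick numerical sanity check (mine, not part of the original question), the inequality and its equality case $a=b=c$ can be probed by sampling random triangles:

```python
import random

def lhs(a, b, c):
    return a/(a+b)**2 + b/(b+c)**2 + c/(c+a)**2

def rhs(a, b, c):
    return 9 / (4*(a+b+c))

# Equality for equilateral triangles: each term is a/(2a)^2 = 1/(4a),
# so the left side is 3/(4a) = 9/(4*3a), which is exactly the right side.
assert abs(lhs(1, 1, 1) - rhs(1, 1, 1)) < 1e-12

# Random triangles via the Ravi substitution a = y+z, b = z+x, c = x+y,
# which automatically satisfies the triangle inequality.
random.seed(0)
for _ in range(10000):
    x, y, z = (random.uniform(0.01, 10) for _ in range(3))
    a, b, c = y + z, z + x, x + y
    assert lhs(a, b, c) >= rhs(a, b, c) - 1e-12
```

Of course this only supports the statement; the answers below give actual proofs.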
[ { "answer_id": 3701941, "score": 2, "text": "<p>After using Ravi subtitution, we need to prove\n<span class=\"math-container\">$$\\sum \\frac{x+y}{(2y+z+x)^2} \\geqslant \\frac{9}{8(x+y+z)}.$$</span>\nBy the Cauchy-Schwarz inequality we get\n<span class=\"math-container\">$$\\sum \\frac{x+y}{(2y+z+x)^2} \\geqslant \\frac{\\left[\\displaystyle \\sum (x+y)(42x+3y+55z) \\right]^2}{\\displaystyle \\sum (2y+z+x)^2(x+y)(42x+3y+55z)^2}.$$</span>\nTherefore, we will show that\n<span class=\"math-container\">$$8(x+y+z)\\left[\\displaystyle \\sum (x+y)(42x+3y+55z) \\right]^2 \\geqslant 9\\sum (2y+z+x)^2(x+y)(42x+3y+55z)^2,$$</span>\nor\n<span class=\"math-container\">$$11700 \\sum xy^2(y-z)^2 + \\sum \\left(1954xyz+6723x^2y+\\frac{3104xy^2}{3}+\\frac{5053yz^2}{3}\\right)(z+x-2y)^2$$</span>\n<span class=\"math-container\">$$\\geqslant 0.$$</span>\nDone.</p>\n" }, { "answer_id": 2223715, "score": 1, "text": "<p>clearing the denominators you inequality is equivalent to\n$$4\\,b{a}^{5}+7\\,{b}^{2}{a}^{4}+2\\,bc{a}^{4}-5\\,{c}^{2}{a}^{4}-2\\,{b}^{3\n}{a}^{3}+2\\,{b}^{2}c{a}^{3}-6\\,b{c}^{2}{a}^{3}-2\\,{c}^{3}{a}^{3}-5\\,{b\n}^{4}{a}^{2}-6\\,{b}^{3}c{a}^{2}-6\\,{b}^{2}{c}^{2}{a}^{2}+2\\,b{c}^{3}{a\n}^{2}+7\\,{c}^{4}{a}^{2}+2\\,{b}^{4}ca+2\\,{b}^{3}{c}^{2}a-6\\,{b}^{2}{c}^\n{3}a+2\\,b{c}^{4}a+4\\,{c}^{5}a+4\\,{b}^{5}c+7\\,{b}^{4}{c}^{2}-2\\,{b}^{3}\n{c}^{3}-5\\,{b}^{2}{c}^{4}\n\\geq 0$$\nnow we use the Ravi-substitution\n$$a=z+y,b=x+z,c=x+y$$ and your inequality is equivalent to\n$$4\\,{x}^{6}-4\\,{x}^{5}y+36\\,{x}^{5}z-21\\,{x}^{4}{y}^{2}+22\\,{x}^{4}yz+\n67\\,{x}^{4}{z}^{2}+22\\,{x}^{3}{y}^{3}-62\\,{x}^{3}{y}^{2}z-14\\,{x}^{3}y\n{z}^{2}+22\\,{x}^{3}{z}^{3}+67\\,{x}^{2}{y}^{4}-14\\,{x}^{2}{y}^{3}z-150\n\\,{x}^{2}{y}^{2}{z}^{2}-62\\,{x}^{2}y{z}^{3}-21\\,{x}^{2}{z}^{4}+36\\,x{y\n}^{5}+22\\,x{y}^{4}z-62\\,x{y}^{3}{z}^{2}-14\\,x{y}^{2}{z}^{3}+22\\,xy{z}^\n{4}-4\\,x{z}^{5}+4\\,{y}^{6}-4\\,{y}^{5}z-21\\,{y}^{4}{z}^{2}+22\\,{y}^{3}{\nz}^{3}+67\\,{y}^{2}{z}^{4}+36\\,y{z}^{5}+4\\,{z}^{6}\n\\geq 0$$\nnow we use $$y=x+u,z=x+u+v$$ and we get the equivalent inequality\n$$\\left( 512\\,{u}^{2}+512\\,uv+512\\,{v}^{2} \\right) {x}^{4}+ \\left( 1408\n\\,{u}^{3}+2688\\,{u}^{2}v+2560\\,u{v}^{2}+640\\,{v}^{3} \\right) {x}^{3}+\n \\left( 1440\\,{u}^{4}+4032\\,{u}^{3}v+4704\\,{u}^{2}{v}^{2}+2112\\,u{v}^{\n3}+288\\,{v}^{4} \\right) {x}^{2}+ \\left( 648\\,{u}^{5}+2384\\,{u}^{4}v+\n3456\\,{u}^{3}{v}^{2}+2224\\,{u}^{2}{v}^{3}+616\\,u{v}^{4}+56\\,{v}^{5}\n \\right) x+108\\,{u}^{6}+492\\,{u}^{5}v+867\\,{u}^{4}{v}^{2}+730\\,{u}^{3}\n{v}^{3}+307\\,{u}^{2}{v}^{4}+60\\,u{v}^{5}+4\\,{v}^{6}\n\\geq 0$$\nwhich is true.</p>\n" }, { "answer_id": 2347350, "score": 0, "text": "<p>Have: $\\Leftrightarrow \\left(\\sum _{cyc}a\\right)\\left(\\sum _{cyc}\\frac{a}{\\left(b+c\\right)^2}\\right)\\ge \\frac{9}{4}$</p>\n\n<p>By C-S and Nesbitt : $LHS\\ge \\left(\\frac{a}{b+c}+\\frac{b}{b+c}+\\frac{c}{c+a}\\right)^2=\\frac{9}{4}$</p>\n" }, { "answer_id": 2347924, "score": 0, "text": "<p>WLOG $a+b+c=1$</p>\n\n<p>$f\\left(0;1\\right)\\rightarrow \\left(0;\\infty \\right);f\\left(x\\right)=\\frac{x}{\\left(1-x\\right)^2}$</p>\n\n<p>See $f$ is a convex function, then use Jensen's inequality</p>\n" } ]
Question: For all triangle prove that $\sum\limits_{cyc}\frac{a}{(a+b)^2}\geq\frac{9}{4(a+b+c)}$ Answer 1: <p>After using Ravi subtitution, we need to prove <span class="math-container">$$\sum \frac{x+y}{(2y+z+x)^2} \geqslant \frac{9}{8(x+y+z)}.$$</span> By the Cauchy-Schwarz inequality we get <span class="math-container">$$\sum \frac{x+y}{(2y+z+x)^2} \geqslant \frac{\left[\displaystyle \sum (x+y)(42x+3y+55z) \right]^2}{\displaystyle \sum (2y+z+x)^2(x+y)(42x+3y+55z)^2}.$$</span> Therefore, we will show that <span class="math-container">$$8(x+y+z)\left[\displaystyle \sum (x+y)(42x+3y+55z) \right]^2 \geqslant 9\sum (2y+z+x)^2(x+y)(42x+3y+55z)^2,$$</span> or <span class="math-container">$$11700 \sum xy^2(y-z)^2 + \sum \left(1954xyz+6723x^2y+\frac{3104xy^2}{3}+\frac{5053yz^2}{3}\right)(z+x-2y)^2$$</span> <span class="math-container">$$\geqslant 0.$$</span> Done.</p> Answer 2: <p>clearing the denominators you inequality is equivalent to $$4\,b{a}^{5}+7\,{b}^{2}{a}^{4}+2\,bc{a}^{4}-5\,{c}^{2}{a}^{4}-2\,{b}^{3 }{a}^{3}+2\,{b}^{2}c{a}^{3}-6\,b{c}^{2}{a}^{3}-2\,{c}^{3}{a}^{3}-5\,{b }^{4}{a}^{2}-6\,{b}^{3}c{a}^{2}-6\,{b}^{2}{c}^{2}{a}^{2}+2\,b{c}^{3}{a }^{2}+7\,{c}^{4}{a}^{2}+2\,{b}^{4}ca+2\,{b}^{3}{c}^{2}a-6\,{b}^{2}{c}^ {3}a+2\,b{c}^{4}a+4\,{c}^{5}a+4\,{b}^{5}c+7\,{b}^{4}{c}^{2}-2\,{b}^{3} {c}^{3}-5\,{b}^{2}{c}^{4} \geq 0$$ now we use the Ravi-substitution $$a=z+y,b=x+z,c=x+y$$ and your inequality is equivalent to $$4\,{x}^{6}-4\,{x}^{5}y+36\,{x}^{5}z-21\,{x}^{4}{y}^{2}+22\,{x}^{4}yz+ 67\,{x}^{4}{z}^{2}+22\,{x}^{3}{y}^{3}-62\,{x}^{3}{y}^{2}z-14\,{x}^{3}y {z}^{2}+22\,{x}^{3}{z}^{3}+67\,{x}^{2}{y}^{4}-14\,{x}^{2}{y}^{3}z-150 \,{x}^{2}{y}^{2}{z}^{2}-62\,{x}^{2}y{z}^{3}-21\,{x}^{2}{z}^{4}+36\,x{y }^{5}+22\,x{y}^{4}z-62\,x{y}^{3}{z}^{2}-14\,x{y}^{2}{z}^{3}+22\,xy{z}^ {4}-4\,x{z}^{5}+4\,{y}^{6}-4\,{y}^{5}z-21\,{y}^{4}{z}^{2}+22\,{y}^{3}{ z}^{3}+67\,{y}^{2}{z}^{4}+36\,y{z}^{5}+4\,{z}^{6} \geq 0$$ now we use $$y=x+u,z=x+u+v$$ and we get the equivalent inequality $$\left( 
512\,{u}^{2}+512\,uv+512\,{v}^{2} \right) {x}^{4}+ \left( 1408 \,{u}^{3}+2688\,{u}^{2}v+2560\,u{v}^{2}+640\,{v}^{3} \right) {x}^{3}+ \left( 1440\,{u}^{4}+4032\,{u}^{3}v+4704\,{u}^{2}{v}^{2}+2112\,u{v}^{ 3}+288\,{v}^{4} \right) {x}^{2}+ \left( 648\,{u}^{5}+2384\,{u}^{4}v+ 3456\,{u}^{3}{v}^{2}+2224\,{u}^{2}{v}^{3}+616\,u{v}^{4}+56\,{v}^{5} \right) x+108\,{u}^{6}+492\,{u}^{5}v+867\,{u}^{4}{v}^{2}+730\,{u}^{3} {v}^{3}+307\,{u}^{2}{v}^{4}+60\,u{v}^{5}+4\,{v}^{6} \geq 0$$ which is true.</p> Answer 3: <p>Have: $\Leftrightarrow \left(\sum _{cyc}a\right)\left(\sum _{cyc}\frac{a}{\left(b+c\right)^2}\right)\ge \frac{9}{4}$</p> <p>By C-S and Nesbitt : $LHS\ge \left(\frac{a}{b+c}+\frac{b}{b+c}+\frac{c}{c+a}\right)^2=\frac{9}{4}$</p> Answer 4: <p>WLOG $a+b+c=1$</p> <p>$f\left(0;1\right)\rightarrow \left(0;\infty \right);f\left(x\right)=\frac{x}{\left(1-x\right)^2}$</p> <p>See $f$ is a convex function, then use Jensen's inequality</p>
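The answers above reduce the triangle inequality to a manifestly non-negative polynomial. As a quick numerical sanity check (my addition, not part of the original thread), the sketch below samples valid triangle sides through the same Ravi substitution $a=y+z$, $b=z+x$, $c=x+y$ used in the answers and tests the stated inequality $\sum_{cyc}\frac{a}{(a+b)^2}\geq\frac{9}{4(a+b+c)}$ directly:

```python
import random

def lhs(a, b, c):
    # cyclic sum a/(a+b)^2 + b/(b+c)^2 + c/(c+a)^2
    return a / (a + b) ** 2 + b / (b + c) ** 2 + c / (c + a) ** 2

def rhs(a, b, c):
    return 9.0 / (4.0 * (a + b + c))

random.seed(0)
for _ in range(2000):
    # Ravi substitution: positive x, y, z always yield valid triangle sides
    x, y, z = (random.uniform(0.01, 10.0) for _ in range(3))
    a, b, c = y + z, z + x, x + y
    assert lhs(a, b, c) >= rhs(a, b, c) - 1e-12

# equality is attained for the equilateral case a = b = c
assert abs(lhs(1.0, 1.0, 1.0) - rhs(1.0, 1.0, 1.0)) < 1e-12
print("inequality holds on all sampled triangles")
```

This only spot-checks the claim, of course; the polynomial identities in the answers are what actually prove it.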
Simplifying fraction with factorials: $\frac{(3(n+1))!}{(3n)!}$
2,109,192
6
https://math.stackexchange.com/questions/2109192/simplifying-fraction-with-factorials-frac3n13n
<p>I was trying to solve the limit:</p> <p>$$\lim_{n \to \infty} \sqrt[n]{\frac{(3n)!}{(5n)^{3n}}}$$</p> <p>By using the root's criterion for limits (which is valid in this case, since $b_n$ increases monotonically):</p> <p>$$L= \lim_{n \to \infty} \frac{a_n}{b_n} = \lim_{n \to \infty} \frac{a_{n+1}-a_n}{b_{n+1}-b_n}$$</p> <p>Now I realise using Stirling's formula would make everything easier, but my first approach was simplifying the factorial after applying the criterion I mentioned before. So, after a few failed attempts I looked it up on Mathematica and it said that $\frac{(3(n+1))!}{(3n)!}$ (which is one of the fractions you have to simplify) equals $3(n+1)(3n+1)(3n+2)$. Since I can't get there myself I want to know how you would do it.</p> <p>Just so you can correct me, my reasoning was:</p> <p>$$\frac{(3(n+1))!}{(3n)!} = \frac{3\cdot 2 \cdot 3 \cdot 3 \cdot 3 \cdot 4 \cdot (...) \cdot 3 \cdot (n+1)}{3 \cdot 1 \cdot 3 \cdot 2 \cdot 3 \cdot 3 \cdot (...) \cdot 3 \cdot n } = $$ $$= \frac{3^n(n+1)!}{3^{n}n!} = \frac{(n+1)!}{n!} = n+1$$</p> <p>Which apparently isn't correct. I must have failed at something very silly. Thanks in advance!</p>
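Both claims in the question can be cross-checked numerically (my addition, not part of the original post): the quotient simplification Mathematica reported, and the limit value $(3/(5e))^3$ that Stirling's formula predicts. `math.lgamma` is used for the limit so that $(3n)!$ never has to be formed explicitly:

```python
import math

# (3(n+1))! / (3n)! should equal (3n+1)(3n+2)(3n+3) = 3(n+1)(3n+1)(3n+2)
for n in range(1, 20):
    q = math.factorial(3 * (n + 1)) // math.factorial(3 * n)
    assert q == (3 * n + 1) * (3 * n + 2) * (3 * n + 3)

def root_term(n):
    # n-th root of (3n)!/(5n)^(3n), computed via log (3n)! = lgamma(3n + 1)
    return math.exp((math.lgamma(3 * n + 1) - 3 * n * math.log(5 * n)) / n)

limit = (3 / (5 * math.e)) ** 3  # = 27 / (125 e^3), roughly 0.01075
assert abs(root_term(10**5) - limit) / limit < 1e-3
```

The slow `(3n/e)^{1/n}`-type correction terms mean convergence is only logarithmic, which is why a large `n` and a loose tolerance are used in the check.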
[ { "answer_id": 2109199, "score": 5, "text": "<p>By Stirling's inequality the answer is clearly $\\left(\\frac{3}{5e}\\right)^3$. To prove it, you may notice that by setting\n$$ a_n = \\frac{(3n)!}{(5n)^{3n}} $$\nyou have:\n$$ \\frac{a_{n+1}}{a_n} = \\frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \\frac{\\frac{3n+3}{5n+5}\\cdot\\frac{3n+2}{5n+5}\\cdot\\frac{3n+1}{5n+5}}{\\left(1+\\frac{1}{n}\\right)^{3n}}\\to\\frac{\\left(\\frac{3}{5}\\right)^3}{e^3}$$\nas $n\\to +\\infty$.</p>\n" }, { "answer_id": 2109309, "score": 3, "text": "<blockquote>\n <p>I thought it might be useful to present an approach that relies on elementary tools only. To that end, we proceed.</p>\n</blockquote>\n\n<p>First, we write </p>\n\n<p>$$\\begin{align}\n\\frac{1}{n}\\log((3n)!)&amp;=\\frac1n\\sum_{k=1}^{3n}\\log(k)\\\\\\\\\n&amp;=\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n) \\tag 1\n\\end{align}$$</p>\n\n<p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\\int_0^3 \\log(x)\\,dx=3\\log(3)-3$. </p>\n\n<p>Using $(1)$, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\sqrt[n]{\\frac{(3n)!}{(5n)^{3n}}}&amp;=\\lim_{n\\to \\infty}e^{\\frac1n \\log((3n)!)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)-3\\log(5)}\\\\\\\\\n&amp;= e^{3\\log(3)-3-3\\log(5)}\\\\\\\\\n&amp;=\\left(\\frac{3}{5e}\\right)^3\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<blockquote>\n <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p>\n</blockquote>\n" }, { "answer_id": 2109210, "score": 1, "text": "<p>$$ (3(n+1))! 
\\neq 3 \\cdot 2 \\cdot 3 \\cdot 3 \\cdot 3 \\cdot 4 \\cdots 3 \\cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \\cdot 2) \\cdot (3 \\cdot 3) \\cdot (3 \\cdot 4) \\cdots (3 \\cdot (n+1)) $$ That's just multiplying the multiples of $3$ from $3 \\cdot 2$ up to $3(n+1)$ together; the factorial is defined as multiplying together <em>all</em> positive integers up to $ 3(n+1) $. So the correct expansions are \\begin{align*}\n(3(n+1))! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n \\cdot (3n+1) \\cdot (3n+2) \\cdot 3(n+1) \\\\\n(3n)! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n\n\\end{align*}\nwhich clearly have the quotient Mathematica gave you. </p>\n" }, { "answer_id": 2109237, "score": 1, "text": "<p>My favorite elementary inequality\nfor $n!$:\n$\n\\def\\lfrac#1#2{{\\large\\frac{#1}{#2}}}\n$\n$(n/e)^n\n&lt; n!\n&lt; (n/e)^{n+1}\n$\n(the upper bound holds for all sufficiently large $n$, which is all a limit argument needs).</p>\n\n<p>Taking n-th roots,\n$n/e\n&lt; (n!)^{1/n}\n&lt; (n/e)^{1+1/n}\n$.</p>\n\n<p>Therefore\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&gt;\\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}}{(5n)^{3}}\n=\\lfrac{27n^3}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}\n$.</p>\n\n<p>Arguing the other way,\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&lt;\\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}}\n=\\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}(3n/e)^{1/n}\n$.\nSince,\nas $n \\to \\infty$,\n$n^{1/n}\n\\to 1\n$\nand\n$a^{1/n}\n\\to 1\n$\nfor any $a &gt; 0$,\n$(3n/e)^{1/n}\n\\to 1\n$, so the limit is\n$\\lfrac{27}{125e^3}\n$.</p>\n\n<hr>\n\n<p>A more general result:</p>\n\n<p>$\\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\lfrac{((an)!)^{1/n}}{(bn)^a}\n\\gt \\lfrac{((an/e)^{an})^{1/n}}{(bn)^a}\n=\\lfrac{(an/e)^a}{(bn)^a}\n=\\lfrac{a^a
n^a}{b^ae^a}\n=\\lfrac{a^a}{b^ae^a}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n\n<p>The other way goes exactly as above,\nso that\n$\\lim_{n \\to \\infty} \\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n" }, { "answer_id": 2109199, "score": 5, "text": "<p>By Stirling's inequality the answer is clearly $\\left(\\frac{3}{5e}\\right)^3$. To prove it, you may notice that by setting\n$$ a_n = \\frac{(3n)!}{(5n)^{3n}} $$\nyou have:\n$$ \\frac{a_{n+1}}{a_n} = \\frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \\frac{\\frac{3n+3}{5n+5}\\cdot\\frac{3n+2}{5n+5}\\cdot\\frac{3n+1}{5n+5}}{\\left(1+\\frac{1}{n}\\right)^{3n}}\\to\\frac{\\left(\\frac{3}{5}\\right)^3}{e^3}$$\nas $n\\to +\\infty$.</p>\n" }, { "answer_id": 2109309, "score": 3, "text": "<blockquote>\n <p>I thought it might be useful to present an approach that relies on elementary tools only. To that end, we proceed.</p>\n</blockquote>\n\n<p>First, we write </p>\n\n<p>$$\\begin{align}\n\\frac{1}{n}\\log((3n!))&amp;=\\frac1n\\sum_{k=1}^{3n}\\log(k)\\\\\\\\\n&amp;=\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n) \\tag 1\n\\end{align}$$</p>\n\n<p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\\int_0^3 \\log(x)\\,dx=3\\log(3)-3$. 
</p>\n\n<p>Using $(1)$, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\sqrt[n]{\\frac{(3n)!}{(5n)^{3n}}}&amp;=\\lim_{n\\to \\infty}e^{\\frac1n \\log((3n)!)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)-3\\log(5)}\\\\\\\\\n&amp;= e^{3\\log(3)-3-3\\log(5)}\\\\\\\\\n&amp;=\\left(\\frac{3}{5e}\\right)^3\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<blockquote>\n <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p>\n</blockquote>\n" }, { "answer_id": 2109210, "score": 1, "text": "<p>$$ (3(n+1))! \\neq 3 \\cdot 2 \\cdot 3 \\cdot 3 \\cdot 3 \\cdot 4 \\cdots 3 \\cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \\cdot 2) \\cdot (3 \\cdot 3) \\cdot (3 \\cdot 4) \\cdots (3 \\cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \\begin{align*}\n(3(n+1))! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n \\cdot (3n+1) \\cdot (3n+2) \\cdot 3(n+1) \\\\\n(3n)! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n\n\\end{align*}\nwhich clearly have the quotient Mathematica gave you. 
</p>\n" }, { "answer_id": 2109237, "score": 1, "text": "<p>My favorite elementary inequality\nfor $n!$:\n$\n\\def\\lfrac#1#2{{\\large\\frac{#1}{#2}}}\n$\n$(n/e)^n\n&lt; n!\n&lt; (n/e)^{n+1}\n$.</p>\n\n<p>Taking n-th roots,\n$n/e\n&lt; (n!)^{1/n}\n&lt; (n/e)^{1+1/n}\n$.</p>\n\n<p>Therefore\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&gt;\\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}}{(5n)^{3}}\n=\\lfrac{27n^3}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}\n$.</p>\n\n<p>Arguing the other way,\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&lt;\\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}}\n=\\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}(3n/e)^{1/n}\n$.\nSince,\nas $n \\to \\infty$,\n$n^{1/n}\n\\to 1\n$\nand\n$a^{1/n}\n\\to 1\n$\nfor any $a &gt; 0$,\n$(3n/e)^{1/n}\n\\to 1\n$ so the limit is\n$\\lfrac{27}{125e^3}\n$.</p>\n\n<hr>\n\n<p>A more general result:</p>\n\n<p>$\\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\lfrac{((an)!)^{1/n}}{(bn)^a}\n\\gt \\lfrac{((an/e)^{an})^{1/n}}{(bn)^a}\n=\\lfrac{(an/e)^a}{(bn)^a}\n=\\lfrac{a^a n^a}{b^ae^a}\n=\\lfrac{a^a}{b^ae^a}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n\n<p>The other way goes exactly as above,\nso that\n$\\lim_{n \\to \\infty} \\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n" }, { "answer_id": 2109199, "score": 5, "text": "<p>By Stirling's inequality the answer is clearly $\\left(\\frac{3}{5e}\\right)^3$. 
To prove it, you may notice that by setting\n$$ a_n = \\frac{(3n)!}{(5n)^{3n}} $$\nyou have:\n$$ \\frac{a_{n+1}}{a_n} = \\frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \\frac{\\frac{3n+3}{5n+5}\\cdot\\frac{3n+2}{5n+5}\\cdot\\frac{3n+1}{5n+5}}{\\left(1+\\frac{1}{n}\\right)^{3n}}\\to\\frac{\\left(\\frac{3}{5}\\right)^3}{e^3}$$\nas $n\\to +\\infty$.</p>\n" }, { "answer_id": 2109309, "score": 3, "text": "<blockquote>\n <p>I thought it might be useful to present an approach that relies on elementary tools only. To that end, we proceed.</p>\n</blockquote>\n\n<p>First, we write </p>\n\n<p>$$\\begin{align}\n\\frac{1}{n}\\log((3n!))&amp;=\\frac1n\\sum_{k=1}^{3n}\\log(k)\\\\\\\\\n&amp;=\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n) \\tag 1\n\\end{align}$$</p>\n\n<p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\\int_0^3 \\log(x)\\,dx=3\\log(3)-3$. </p>\n\n<p>Using $(1)$, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\sqrt[n]{\\frac{(3n)!}{(5n)^{3n}}}&amp;=\\lim_{n\\to \\infty}e^{\\frac1n \\log((3n)!)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)-3\\log(5)}\\\\\\\\\n&amp;= e^{3\\log(3)-3-3\\log(5)}\\\\\\\\\n&amp;=\\left(\\frac{3}{5e}\\right)^3\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<blockquote>\n <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p>\n</blockquote>\n" }, { "answer_id": 2109210, "score": 1, "text": "<p>$$ (3(n+1))! 
\\neq 3 \\cdot 2 \\cdot 3 \\cdot 3 \\cdot 3 \\cdot 4 \\cdots 3 \\cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \\cdot 2) \\cdot (3 \\cdot 3) \\cdot (3 \\cdot 4) \\cdots (3 \\cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \\begin{align*}\n(3(n+1))! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n \\cdot (3n+1) \\cdot (3n+2) \\cdot 3(n+1) \\\\\n(3n)! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n\n\\end{align*}\nwhich clearly have the quotient Mathematica gave you. </p>\n" }, { "answer_id": 2109237, "score": 1, "text": "<p>My favorite elementary inequality\nfor $n!$:\n$\n\\def\\lfrac#1#2{{\\large\\frac{#1}{#2}}}\n$\n$(n/e)^n\n&lt; n!\n&lt; (n/e)^{n+1}\n$.</p>\n\n<p>Taking n-th roots,\n$n/e\n&lt; (n!)^{1/n}\n&lt; (n/e)^{1+1/n}\n$.</p>\n\n<p>Therefore\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&gt;\\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}}{(5n)^{3}}\n=\\lfrac{27n^3}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}\n$.</p>\n\n<p>Arguing the other way,\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&lt;\\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}}\n=\\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}(3n/e)^{1/n}\n$.\nSince,\nas $n \\to \\infty$,\n$n^{1/n}\n\\to 1\n$\nand\n$a^{1/n}\n\\to 1\n$\nfor any $a &gt; 0$,\n$(3n/e)^{1/n}\n\\to 1\n$ so the limit is\n$\\lfrac{27}{125e^3}\n$.</p>\n\n<hr>\n\n<p>A more general result:</p>\n\n<p>$\\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\lfrac{((an)!)^{1/n}}{(bn)^a}\n\\gt \\lfrac{((an/e)^{an})^{1/n}}{(bn)^a}\n=\\lfrac{(an/e)^a}{(bn)^a}\n=\\lfrac{a^a 
n^a}{b^ae^a}\n=\\lfrac{a^a}{b^ae^a}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n\n<p>The other way goes exactly as above,\nso that\n$\\lim_{n \\to \\infty} \\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n" }, { "answer_id": 2109199, "score": 5, "text": "<p>By Stirling's inequality the answer is clearly $\\left(\\frac{3}{5e}\\right)^3$. To prove it, you may notice that by setting\n$$ a_n = \\frac{(3n)!}{(5n)^{3n}} $$\nyou have:\n$$ \\frac{a_{n+1}}{a_n} = \\frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \\frac{\\frac{3n+3}{5n+5}\\cdot\\frac{3n+2}{5n+5}\\cdot\\frac{3n+1}{5n+5}}{\\left(1+\\frac{1}{n}\\right)^{3n}}\\to\\frac{\\left(\\frac{3}{5}\\right)^3}{e^3}$$\nas $n\\to +\\infty$.</p>\n" }, { "answer_id": 2109309, "score": 3, "text": "<blockquote>\n <p>I thought it might be useful to present an approach that relies on elementary tools only. To that end, we proceed.</p>\n</blockquote>\n\n<p>First, we write </p>\n\n<p>$$\\begin{align}\n\\frac{1}{n}\\log((3n!))&amp;=\\frac1n\\sum_{k=1}^{3n}\\log(k)\\\\\\\\\n&amp;=\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n) \\tag 1\n\\end{align}$$</p>\n\n<p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\\int_0^3 \\log(x)\\,dx=3\\log(3)-3$. 
</p>\n\n<p>Using $(1)$, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\sqrt[n]{\\frac{(3n)!}{(5n)^{3n}}}&amp;=\\lim_{n\\to \\infty}e^{\\frac1n \\log((3n)!)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)-3\\log(5)}\\\\\\\\\n&amp;= e^{3\\log(3)-3-3\\log(5)}\\\\\\\\\n&amp;=\\left(\\frac{3}{5e}\\right)^3\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<blockquote>\n <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p>\n</blockquote>\n" }, { "answer_id": 2109210, "score": 1, "text": "<p>$$ (3(n+1))! \\neq 3 \\cdot 2 \\cdot 3 \\cdot 3 \\cdot 3 \\cdot 4 \\cdots 3 \\cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \\cdot 2) \\cdot (3 \\cdot 3) \\cdot (3 \\cdot 4) \\cdots (3 \\cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \\begin{align*}\n(3(n+1))! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n \\cdot (3n+1) \\cdot (3n+2) \\cdot 3(n+1) \\\\\n(3n)! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n\n\\end{align*}\nwhich clearly have the quotient Mathematica gave you. 
</p>\n" }, { "answer_id": 2109237, "score": 1, "text": "<p>My favorite elementary inequality\nfor $n!$:\n$\n\\def\\lfrac#1#2{{\\large\\frac{#1}{#2}}}\n$\n$(n/e)^n\n&lt; n!\n&lt; (n/e)^{n+1}\n$.</p>\n\n<p>Taking n-th roots,\n$n/e\n&lt; (n!)^{1/n}\n&lt; (n/e)^{1+1/n}\n$.</p>\n\n<p>Therefore\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&gt;\\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}}{(5n)^{3}}\n=\\lfrac{27n^3}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}\n$.</p>\n\n<p>Arguing the other way,\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&lt;\\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}}\n=\\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}(3n/e)^{1/n}\n$.\nSince,\nas $n \\to \\infty$,\n$n^{1/n}\n\\to 1\n$\nand\n$a^{1/n}\n\\to 1\n$\nfor any $a &gt; 0$,\n$(3n/e)^{1/n}\n\\to 1\n$ so the limit is\n$\\lfrac{27}{125e^3}\n$.</p>\n\n<hr>\n\n<p>A more general result:</p>\n\n<p>$\\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\lfrac{((an)!)^{1/n}}{(bn)^a}\n\\gt \\lfrac{((an/e)^{an})^{1/n}}{(bn)^a}\n=\\lfrac{(an/e)^a}{(bn)^a}\n=\\lfrac{a^a n^a}{b^ae^a}\n=\\lfrac{a^a}{b^ae^a}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n\n<p>The other way goes exactly as above,\nso that\n$\\lim_{n \\to \\infty} \\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n" }, { "answer_id": 2109199, "score": 5, "text": "<p>By Stirling's inequality the answer is clearly $\\left(\\frac{3}{5e}\\right)^3$. 
To prove it, you may notice that by setting\n$$ a_n = \\frac{(3n)!}{(5n)^{3n}} $$\nyou have:\n$$ \\frac{a_{n+1}}{a_n} = \\frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \\frac{\\frac{3n+3}{5n+5}\\cdot\\frac{3n+2}{5n+5}\\cdot\\frac{3n+1}{5n+5}}{\\left(1+\\frac{1}{n}\\right)^{3n}}\\to\\frac{\\left(\\frac{3}{5}\\right)^3}{e^3}$$\nas $n\\to +\\infty$.</p>\n" }, { "answer_id": 2109309, "score": 3, "text": "<blockquote>\n <p>I thought it might be useful to present an approach that relies on elementary tools only. To that end, we proceed.</p>\n</blockquote>\n\n<p>First, we write </p>\n\n<p>$$\\begin{align}\n\\frac{1}{n}\\log((3n!))&amp;=\\frac1n\\sum_{k=1}^{3n}\\log(k)\\\\\\\\\n&amp;=\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n) \\tag 1\n\\end{align}$$</p>\n\n<p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\\int_0^3 \\log(x)\\,dx=3\\log(3)-3$. </p>\n\n<p>Using $(1)$, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\sqrt[n]{\\frac{(3n)!}{(5n)^{3n}}}&amp;=\\lim_{n\\to \\infty}e^{\\frac1n \\log((3n)!)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)-3\\log(5)}\\\\\\\\\n&amp;= e^{3\\log(3)-3-3\\log(5)}\\\\\\\\\n&amp;=\\left(\\frac{3}{5e}\\right)^3\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<blockquote>\n <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p>\n</blockquote>\n" }, { "answer_id": 2109210, "score": 1, "text": "<p>$$ (3(n+1))! 
\\neq 3 \\cdot 2 \\cdot 3 \\cdot 3 \\cdot 3 \\cdot 4 \\cdots 3 \\cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \\cdot 2) \\cdot (3 \\cdot 3) \\cdot (3 \\cdot 4) \\cdots (3 \\cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \\begin{align*}\n(3(n+1))! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n \\cdot (3n+1) \\cdot (3n+2) \\cdot 3(n+1) \\\\\n(3n)! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n\n\\end{align*}\nwhich clearly have the quotient Mathematica gave you. </p>\n" }, { "answer_id": 2109237, "score": 1, "text": "<p>My favorite elementary inequality\nfor $n!$:\n$\n\\def\\lfrac#1#2{{\\large\\frac{#1}{#2}}}\n$\n$(n/e)^n\n&lt; n!\n&lt; (n/e)^{n+1}\n$.</p>\n\n<p>Taking n-th roots,\n$n/e\n&lt; (n!)^{1/n}\n&lt; (n/e)^{1+1/n}\n$.</p>\n\n<p>Therefore\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&gt;\\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}}{(5n)^{3}}\n=\\lfrac{27n^3}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}\n$.</p>\n\n<p>Arguing the other way,\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&lt;\\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}}\n=\\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}(3n/e)^{1/n}\n$.\nSince,\nas $n \\to \\infty$,\n$n^{1/n}\n\\to 1\n$\nand\n$a^{1/n}\n\\to 1\n$\nfor any $a &gt; 0$,\n$(3n/e)^{1/n}\n\\to 1\n$ so the limit is\n$\\lfrac{27}{125e^3}\n$.</p>\n\n<hr>\n\n<p>A more general result:</p>\n\n<p>$\\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\lfrac{((an)!)^{1/n}}{(bn)^a}\n\\gt \\lfrac{((an/e)^{an})^{1/n}}{(bn)^a}\n=\\lfrac{(an/e)^a}{(bn)^a}\n=\\lfrac{a^a 
n^a}{b^ae^a}\n=\\lfrac{a^a}{b^ae^a}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n\n<p>The other way goes exactly as above,\nso that\n$\\lim_{n \\to \\infty} \\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n" }, { "answer_id": 2109199, "score": 5, "text": "<p>By Stirling's inequality the answer is clearly $\\left(\\frac{3}{5e}\\right)^3$. To prove it, you may notice that by setting\n$$ a_n = \\frac{(3n)!}{(5n)^{3n}} $$\nyou have:\n$$ \\frac{a_{n+1}}{a_n} = \\frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \\frac{\\frac{3n+3}{5n+5}\\cdot\\frac{3n+2}{5n+5}\\cdot\\frac{3n+1}{5n+5}}{\\left(1+\\frac{1}{n}\\right)^{3n}}\\to\\frac{\\left(\\frac{3}{5}\\right)^3}{e^3}$$\nas $n\\to +\\infty$.</p>\n" }, { "answer_id": 2109309, "score": 3, "text": "<blockquote>\n <p>I thought it might be useful to present an approach that relies on elementary tools only. To that end, we proceed.</p>\n</blockquote>\n\n<p>First, we write </p>\n\n<p>$$\\begin{align}\n\\frac{1}{n}\\log((3n!))&amp;=\\frac1n\\sum_{k=1}^{3n}\\log(k)\\\\\\\\\n&amp;=\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n) \\tag 1\n\\end{align}$$</p>\n\n<p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\\int_0^3 \\log(x)\\,dx=3\\log(3)-3$. 
</p>\n\n<p>Using $(1)$, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\sqrt[n]{\\frac{(3n)!}{(5n)^{3n}}}&amp;=\\lim_{n\\to \\infty}e^{\\frac1n \\log((3n)!)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)-3\\log(5)}\\\\\\\\\n&amp;= e^{3\\log(3)-3-3\\log(5)}\\\\\\\\\n&amp;=\\left(\\frac{3}{5e}\\right)^3\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<blockquote>\n <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p>\n</blockquote>\n" }, { "answer_id": 2109210, "score": 1, "text": "<p>$$ (3(n+1))! \\neq 3 \\cdot 2 \\cdot 3 \\cdot 3 \\cdot 3 \\cdot 4 \\cdots 3 \\cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \\cdot 2) \\cdot (3 \\cdot 3) \\cdot (3 \\cdot 4) \\cdots (3 \\cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \\begin{align*}\n(3(n+1))! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n \\cdot (3n+1) \\cdot (3n+2) \\cdot 3(n+1) \\\\\n(3n)! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n\n\\end{align*}\nwhich clearly have the quotient Mathematica gave you. 
</p>\n" }, { "answer_id": 2109237, "score": 1, "text": "<p>My favorite elementary inequality\nfor $n!$:\n$\n\\def\\lfrac#1#2{{\\large\\frac{#1}{#2}}}\n$\n$(n/e)^n\n&lt; n!\n&lt; (n/e)^{n+1}\n$.</p>\n\n<p>Taking n-th roots,\n$n/e\n&lt; (n!)^{1/n}\n&lt; (n/e)^{1+1/n}\n$.</p>\n\n<p>Therefore\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&gt;\\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}}{(5n)^{3}}\n=\\lfrac{27n^3}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}\n$.</p>\n\n<p>Arguing the other way,\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&lt;\\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}}\n=\\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}(3n/e)^{1/n}\n$.\nSince,\nas $n \\to \\infty$,\n$n^{1/n}\n\\to 1\n$\nand\n$a^{1/n}\n\\to 1\n$\nfor any $a &gt; 0$,\n$(3n/e)^{1/n}\n\\to 1\n$ so the limit is\n$\\lfrac{27}{125e^3}\n$.</p>\n\n<hr>\n\n<p>A more general result:</p>\n\n<p>$\\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\lfrac{((an)!)^{1/n}}{(bn)^a}\n\\gt \\lfrac{((an/e)^{an})^{1/n}}{(bn)^a}\n=\\lfrac{(an/e)^a}{(bn)^a}\n=\\lfrac{a^a n^a}{b^ae^a}\n=\\lfrac{a^a}{b^ae^a}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n\n<p>The other way goes exactly as above,\nso that\n$\\lim_{n \\to \\infty} \\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n" }, { "answer_id": 2109199, "score": 5, "text": "<p>By Stirling's inequality the answer is clearly $\\left(\\frac{3}{5e}\\right)^3$. 
To prove it, you may notice that by setting\n$$ a_n = \\frac{(3n)!}{(5n)^{3n}} $$\nyou have:\n$$ \\frac{a_{n+1}}{a_n} = \\frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \\frac{\\frac{3n+3}{5n+5}\\cdot\\frac{3n+2}{5n+5}\\cdot\\frac{3n+1}{5n+5}}{\\left(1+\\frac{1}{n}\\right)^{3n}}\\to\\frac{\\left(\\frac{3}{5}\\right)^3}{e^3}$$\nas $n\\to +\\infty$.</p>\n" }, { "answer_id": 2109309, "score": 3, "text": "<blockquote>\n <p>I thought it might be useful to present an approach that relies on elementary tools only. To that end, we proceed.</p>\n</blockquote>\n\n<p>First, we write </p>\n\n<p>$$\\begin{align}\n\\frac{1}{n}\\log((3n!))&amp;=\\frac1n\\sum_{k=1}^{3n}\\log(k)\\\\\\\\\n&amp;=\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n) \\tag 1\n\\end{align}$$</p>\n\n<p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\\int_0^3 \\log(x)\\,dx=3\\log(3)-3$. </p>\n\n<p>Using $(1)$, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\sqrt[n]{\\frac{(3n)!}{(5n)^{3n}}}&amp;=\\lim_{n\\to \\infty}e^{\\frac1n \\log((3n)!)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)-3\\log(5)}\\\\\\\\\n&amp;= e^{3\\log(3)-3-3\\log(5)}\\\\\\\\\n&amp;=\\left(\\frac{3}{5e}\\right)^3\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<blockquote>\n <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p>\n</blockquote>\n" }, { "answer_id": 2109210, "score": 1, "text": "<p>$$ (3(n+1))! 
\\neq 3 \\cdot 2 \\cdot 3 \\cdot 3 \\cdot 3 \\cdot 4 \\cdots 3 \\cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \\cdot 2) \\cdot (3 \\cdot 3) \\cdot (3 \\cdot 4) \\cdots (3 \\cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \\begin{align*}\n(3(n+1))! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n \\cdot (3n+1) \\cdot (3n+2) \\cdot 3(n+1) \\\\\n(3n)! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n\n\\end{align*}\nwhich clearly have the quotient Mathematica gave you. </p>\n" }, { "answer_id": 2109237, "score": 1, "text": "<p>My favorite elementary inequality\nfor $n!$:\n$\n\\def\\lfrac#1#2{{\\large\\frac{#1}{#2}}}\n$\n$(n/e)^n\n&lt; n!\n&lt; (n/e)^{n+1}\n$.</p>\n\n<p>Taking n-th roots,\n$n/e\n&lt; (n!)^{1/n}\n&lt; (n/e)^{1+1/n}\n$.</p>\n\n<p>Therefore\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&gt;\\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}}{(5n)^{3}}\n=\\lfrac{27n^3}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}\n$.</p>\n\n<p>Arguing the other way,\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&lt;\\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}}\n=\\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}(3n/e)^{1/n}\n$.\nSince,\nas $n \\to \\infty$,\n$n^{1/n}\n\\to 1\n$\nand\n$a^{1/n}\n\\to 1\n$\nfor any $a &gt; 0$,\n$(3n/e)^{1/n}\n\\to 1\n$ so the limit is\n$\\lfrac{27}{125e^3}\n$.</p>\n\n<hr>\n\n<p>A more general result:</p>\n\n<p>$\\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\lfrac{((an)!)^{1/n}}{(bn)^a}\n\\gt \\lfrac{((an/e)^{an})^{1/n}}{(bn)^a}\n=\\lfrac{(an/e)^a}{(bn)^a}\n=\\lfrac{a^a 
n^a}{b^ae^a}\n=\\lfrac{a^a}{b^ae^a}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n\n<p>The other way goes exactly as above,\nso that\n$\\lim_{n \\to \\infty} \\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n" }, { "answer_id": 2109199, "score": 5, "text": "<p>By Stirling's inequality the answer is clearly $\\left(\\frac{3}{5e}\\right)^3$. To prove it, you may notice that by setting\n$$ a_n = \\frac{(3n)!}{(5n)^{3n}} $$\nyou have:\n$$ \\frac{a_{n+1}}{a_n} = \\frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \\frac{\\frac{3n+3}{5n+5}\\cdot\\frac{3n+2}{5n+5}\\cdot\\frac{3n+1}{5n+5}}{\\left(1+\\frac{1}{n}\\right)^{3n}}\\to\\frac{\\left(\\frac{3}{5}\\right)^3}{e^3}$$\nas $n\\to +\\infty$.</p>\n" }, { "answer_id": 2109309, "score": 3, "text": "<blockquote>\n <p>I thought it might be useful to present an approach that relies on elementary tools only. To that end, we proceed.</p>\n</blockquote>\n\n<p>First, we write </p>\n\n<p>$$\\begin{align}\n\\frac{1}{n}\\log((3n!))&amp;=\\frac1n\\sum_{k=1}^{3n}\\log(k)\\\\\\\\\n&amp;=\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n) \\tag 1\n\\end{align}$$</p>\n\n<p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\\int_0^3 \\log(x)\\,dx=3\\log(3)-3$. 
</p>\n\n<p>Using $(1)$, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\sqrt[n]{\\frac{(3n)!}{(5n)^{3n}}}&amp;=\\lim_{n\\to \\infty}e^{\\frac1n \\log((3n)!)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)-3\\log(5)}\\\\\\\\\n&amp;= e^{3\\log(3)-3-3\\log(5)}\\\\\\\\\n&amp;=\\left(\\frac{3}{5e}\\right)^3\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<blockquote>\n <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p>\n</blockquote>\n" }, { "answer_id": 2109210, "score": 1, "text": "<p>$$ (3(n+1))! \\neq 3 \\cdot 2 \\cdot 3 \\cdot 3 \\cdot 3 \\cdot 4 \\cdots 3 \\cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \\cdot 2) \\cdot (3 \\cdot 3) \\cdot (3 \\cdot 4) \\cdots (3 \\cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \\begin{align*}\n(3(n+1))! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n \\cdot (3n+1) \\cdot (3n+2) \\cdot 3(n+1) \\\\\n(3n)! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n\n\\end{align*}\nwhich clearly have the quotient Mathematica gave you. 
</p>\n" }, { "answer_id": 2109237, "score": 1, "text": "<p>My favorite elementary inequality\nfor $n!$:\n$\n\\def\\lfrac#1#2{{\\large\\frac{#1}{#2}}}\n$\n$(n/e)^n\n&lt; n!\n&lt; (n/e)^{n+1}\n$.</p>\n\n<p>Taking n-th roots,\n$n/e\n&lt; (n!)^{1/n}\n&lt; (n/e)^{1+1/n}\n$.</p>\n\n<p>Therefore\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&gt;\\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}}{(5n)^{3}}\n=\\lfrac{27n^3}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}\n$.</p>\n\n<p>Arguing the other way,\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&lt;\\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}}\n=\\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}(3n/e)^{1/n}\n$.\nSince,\nas $n \\to \\infty$,\n$n^{1/n}\n\\to 1\n$\nand\n$a^{1/n}\n\\to 1\n$\nfor any $a &gt; 0$,\n$(3n/e)^{1/n}\n\\to 1\n$ so the limit is\n$\\lfrac{27}{125e^3}\n$.</p>\n\n<hr>\n\n<p>A more general result:</p>\n\n<p>$\\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\lfrac{((an)!)^{1/n}}{(bn)^a}\n\\gt \\lfrac{((an/e)^{an})^{1/n}}{(bn)^a}\n=\\lfrac{(an/e)^a}{(bn)^a}\n=\\lfrac{a^a n^a}{b^ae^a}\n=\\lfrac{a^a}{b^ae^a}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n\n<p>The other way goes exactly as above,\nso that\n$\\lim_{n \\to \\infty} \\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n" }, { "answer_id": 2109199, "score": 5, "text": "<p>By Stirling's inequality the answer is clearly $\\left(\\frac{3}{5e}\\right)^3$. 
To prove it, you may notice that by setting\n$$ a_n = \\frac{(3n)!}{(5n)^{3n}} $$\nyou have:\n$$ \\frac{a_{n+1}}{a_n} = \\frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \\frac{\\frac{3n+3}{5n+5}\\cdot\\frac{3n+2}{5n+5}\\cdot\\frac{3n+1}{5n+5}}{\\left(1+\\frac{1}{n}\\right)^{3n}}\\to\\frac{\\left(\\frac{3}{5}\\right)^3}{e^3}$$\nas $n\\to +\\infty$.</p>\n" }, { "answer_id": 2109309, "score": 3, "text": "<blockquote>\n <p>I thought it might be useful to present an approach that relies on elementary tools only. To that end, we proceed.</p>\n</blockquote>\n\n<p>First, we write </p>\n\n<p>$$\\begin{align}\n\\frac{1}{n}\\log((3n!))&amp;=\\frac1n\\sum_{k=1}^{3n}\\log(k)\\\\\\\\\n&amp;=\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n) \\tag 1\n\\end{align}$$</p>\n\n<p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\\int_0^3 \\log(x)\\,dx=3\\log(3)-3$. </p>\n\n<p>Using $(1)$, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{n\\to \\infty}\\sqrt[n]{\\frac{(3n)!}{(5n)^{3n}}}&amp;=\\lim_{n\\to \\infty}e^{\\frac1n \\log((3n)!)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)+3\\log(n)-3\\log(5n)}\\\\\\\\\n&amp;=\\lim_{n\\to \\infty}e^{\\left(\\frac1n\\sum_{k=1}^{3n}\\log(k/n)\\right)-3\\log(5)}\\\\\\\\\n&amp;= e^{3\\log(3)-3-3\\log(5)}\\\\\\\\\n&amp;=\\left(\\frac{3}{5e}\\right)^3\n\\end{align}$$</p>\n\n<p>as expected!</p>\n\n<blockquote>\n <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p>\n</blockquote>\n" }, { "answer_id": 2109210, "score": 1, "text": "<p>$$ (3(n+1))! 
\\neq 3 \\cdot 2 \\cdot 3 \\cdot 3 \\cdot 3 \\cdot 4 \\cdots 3 \\cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \\cdot 2) \\cdot (3 \\cdot 3) \\cdot (3 \\cdot 4) \\cdots (3 \\cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \\begin{align*}\n(3(n+1))! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n \\cdot (3n+1) \\cdot (3n+2) \\cdot 3(n+1) \\\\\n(3n)! &amp;= 2 \\cdot 3 \\cdot 4 \\cdot 5 \\cdot 6 \\cdots (3n-2) \\cdot (3n-1) \\cdot 3n\n\\end{align*}\nwhich clearly have the quotient Mathematica gave you. </p>\n" }, { "answer_id": 2109237, "score": 1, "text": "<p>My favorite elementary inequality\nfor $n!$:\n$\n\\def\\lfrac#1#2{{\\large\\frac{#1}{#2}}}\n$\n$(n/e)^n\n&lt; n!\n&lt; (n/e)^{n+1}\n$.</p>\n\n<p>Taking n-th roots,\n$n/e\n&lt; (n!)^{1/n}\n&lt; (n/e)^{1+1/n}\n$.</p>\n\n<p>Therefore\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&gt;\\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}}{(5n)^{3}}\n=\\lfrac{27n^3}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}\n$.</p>\n\n<p>Arguing the other way,\n$\\sqrt[n]{\\lfrac{(3n)!}{(5n)^{3n}}}\n=\\lfrac{((3n)!)^{1/n}}{(5n)^{3}}\n&lt;\\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}}\n=\\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}}\n=\\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}}\n=\\lfrac{27}{125e^3}(3n/e)^{1/n}\n$.\nSince,\nas $n \\to \\infty$,\n$n^{1/n}\n\\to 1\n$\nand\n$a^{1/n}\n\\to 1\n$\nfor any $a &gt; 0$,\n$(3n/e)^{1/n}\n\\to 1\n$ so the limit is\n$\\lfrac{27}{125e^3}\n$.</p>\n\n<hr>\n\n<p>A more general result:</p>\n\n<p>$\\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\lfrac{((an)!)^{1/n}}{(bn)^a}\n\\gt \\lfrac{((an/e)^{an})^{1/n}}{(bn)^a}\n=\\lfrac{(an/e)^a}{(bn)^a}\n=\\lfrac{a^a 
n^a}{b^ae^a}\n=\\lfrac{a^a}{b^ae^a}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n\n<p>The other way goes exactly as above,\nso that\n$\\lim_{n \\to \\infty} \\sqrt[n]{\\lfrac{(an)!}{(bn)^{an}}}\n=\\left(\\lfrac{a}{be}\\right)^a\n$.</p>\n" } ]
Question: Simplifying fraction with factorials: $\frac{(3(n+1))!}{(3n)!}$ Answer 1: <p>By Stirling's inequality the answer is clearly $\left(\frac{3}{5e}\right)^3$. To prove it, you may notice that by setting $$ a_n = \frac{(3n)!}{(5n)^{3n}} $$ you have: $$ \frac{a_{n+1}}{a_n} = \frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \frac{\frac{3n+3}{5n+5}\cdot\frac{3n+2}{5n+5}\cdot\frac{3n+1}{5n+5}}{\left(1+\frac{1}{n}\right)^{3n}}\to\frac{\left(\frac{3}{5}\right)^3}{e^3}$$ as $n\to +\infty$.</p> Answer 2: <blockquote> <p>I thought it might be useful to present an approach that relies on elementary tools only. To that end, we proceed.</p> </blockquote> <p>First, we write </p> <p>$$\begin{align} \frac{1}{n}\log((3n)!)&amp;=\frac1n\sum_{k=1}^{3n}\log(k)\\\\ &amp;=\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n) \tag 1 \end{align}$$</p> <p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\int_0^3 \log(x)\,dx=3\log(3)-3$. </p> <p>Using $(1)$, we have</p> <p>$$\begin{align} \lim_{n\to \infty}\sqrt[n]{\frac{(3n)!}{(5n)^{3n}}}&amp;=\lim_{n\to \infty}e^{\frac1n \log((3n)!)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)-3\log(5)}\\\\ &amp;= e^{3\log(3)-3-3\log(5)}\\\\ &amp;=\left(\frac{3}{5e}\right)^3 \end{align}$$</p> <p>as expected!</p> <blockquote> <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p> </blockquote> Answer 3: <p>$$ (3(n+1))!
\neq 3 \cdot 2 \cdot 3 \cdot 3 \cdot 3 \cdot 4 \cdots 3 \cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \cdot 2) \cdot (3 \cdot 3) \cdot (3 \cdot 4) \cdots (3 \cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 up to $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers up to $ 3(n+1) $. So the correct expansions are \begin{align*} (3(n+1))! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \cdot (3n+1) \cdot (3n+2) \cdot 3(n+1) \\ (3n)! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \end{align*} which clearly have the quotient Mathematica gave you. </p> Answer 4: <p>My favorite elementary inequality for $n!$, valid for all sufficiently large $n$ (which is all the limit needs): $ \def\lfrac#1#2{{\large\frac{#1}{#2}}} $ $(n/e)^n &lt; n! &lt; (n/e)^{n+1} $.</p> <p>Taking n-th roots, $n/e &lt; (n!)^{1/n} &lt; (n/e)^{1+1/n} $.</p> <p>Therefore $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &gt;\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}}{(5n)^{3}} =\lfrac{27n^3}{125e^3n^{3}} =\lfrac{27}{125e^3} $.</p> <p>Arguing the other way, $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &lt;\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}} =\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}} =\lfrac{27}{125e^3}(3n/e)^{1/n} $.
Since, as $n \to \infty$, $n^{1/n} \to 1 $ and $a^{1/n} \to 1 $ for any $a &gt; 0$, $(3n/e)^{1/n} \to 1 $ so the limit is $\lfrac{27}{125e^3} $.</p> <hr> <p>A more general result:</p> <p>$\sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\lfrac{((an)!)^{1/n}}{(bn)^a} \gt \lfrac{((an/e)^{an})^{1/n}}{(bn)^a} =\lfrac{(an/e)^a}{(bn)^a} =\lfrac{a^a n^a}{b^ae^a} =\lfrac{a^a}{b^ae^a} =\left(\lfrac{a}{be}\right)^a $.</p> <p>The other way goes exactly as above, so that $\lim_{n \to \infty} \sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\left(\lfrac{a}{be}\right)^a $.</p> Answer 5: <p>By Stirling's inequality the answer is clearly $\left(\frac{3}{5e}\right)^3$. To prove it, you may notice that by setting $$ a_n = \frac{(3n)!}{(5n)^{3n}} $$ you have: $$ \frac{a_{n+1}}{a_n} = \frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \frac{\frac{3n+3}{5n+5}\cdot\frac{3n+2}{5n+5}\cdot\frac{3n+1}{5n+5}}{\left(1+\frac{1}{n}\right)^{3n}}\to\frac{\left(\frac{3}{5}\right)^3}{e^3}$$ as $n\to +\infty$.</p> Answer 6: <blockquote> <p>I thought it might be useful to present an approach that relies on elementary tools only. To that end, we proceed.</p> </blockquote> <p>First, we write </p> <p>$$\begin{align} \frac{1}{n}\log((3n!))&amp;=\frac1n\sum_{k=1}^{3n}\log(k)\\\\ &amp;=\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n) \tag 1 \end{align}$$</p> <p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\int_0^3 \log(x)\,dx=3\log(3)-3$. 
</p> <p>Using $(1)$, we have</p> <p>$$\begin{align} \lim_{n\to \infty}\sqrt[n]{\frac{(3n)!}{(5n)^{3n}}}&amp;=\lim_{n\to \infty}e^{\frac1n \log((3n)!)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)-3\log(5)}\\\\ &amp;= e^{3\log(3)-3-3\log(5)}\\\\ &amp;=\left(\frac{3}{5e}\right)^3 \end{align}$$</p> <p>as expected!</p> <blockquote> <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p> </blockquote> Answer 7: <p>$$ (3(n+1))! \neq 3 \cdot 2 \cdot 3 \cdot 3 \cdot 3 \cdot 4 \cdots 3 \cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \cdot 2) \cdot (3 \cdot 3) \cdot (3 \cdot 4) \cdots (3 \cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \begin{align*} (3(n+1))! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \cdot (3n+1) \cdot (3n+2) \cdot 3(n+1) \\ (3n)! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \end{align*} which clearly have the quotient Mathematica gave you. </p> Answer 8: <p>My favorite elementary inequality for $n!$: $ \def\lfrac#1#2{{\large\frac{#1}{#2}}} $ $(n/e)^n &lt; n! 
&lt; (n/e)^{n+1} $.</p> <p>Taking n-th roots, $n/e &lt; (n!)^{1/n} &lt; (n/e)^{1+1/n} $.</p> <p>Therefore $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &gt;\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}}{(5n)^{3}} =\lfrac{27n^3}{125e^3n^{3}} =\lfrac{27}{125e^3} $.</p> <p>Arguing the other way, $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &lt;\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}} =\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}} =\lfrac{27}{125e^3}(3n/e)^{1/n} $. Since, as $n \to \infty$, $n^{1/n} \to 1 $ and $a^{1/n} \to 1 $ for any $a &gt; 0$, $(3n/e)^{1/n} \to 1 $ so the limit is $\lfrac{27}{125e^3} $.</p> <hr> <p>A more general result:</p> <p>$\sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\lfrac{((an)!)^{1/n}}{(bn)^a} \gt \lfrac{((an/e)^{an})^{1/n}}{(bn)^a} =\lfrac{(an/e)^a}{(bn)^a} =\lfrac{a^a n^a}{b^ae^a} =\lfrac{a^a}{b^ae^a} =\left(\lfrac{a}{be}\right)^a $.</p> <p>The other way goes exactly as above, so that $\lim_{n \to \infty} \sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\left(\lfrac{a}{be}\right)^a $.</p> Answer 9: <p>By Stirling's inequality the answer is clearly $\left(\frac{3}{5e}\right)^3$. To prove it, you may notice that by setting $$ a_n = \frac{(3n)!}{(5n)^{3n}} $$ you have: $$ \frac{a_{n+1}}{a_n} = \frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \frac{\frac{3n+3}{5n+5}\cdot\frac{3n+2}{5n+5}\cdot\frac{3n+1}{5n+5}}{\left(1+\frac{1}{n}\right)^{3n}}\to\frac{\left(\frac{3}{5}\right)^3}{e^3}$$ as $n\to +\infty$.</p> Answer 10: <blockquote> <p>I thought it might be useful to present an approach that relies on elementary tools only. 
To that end, we proceed.</p> </blockquote> <p>First, we write </p> <p>$$\begin{align} \frac{1}{n}\log((3n!))&amp;=\frac1n\sum_{k=1}^{3n}\log(k)\\\\ &amp;=\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n) \tag 1 \end{align}$$</p> <p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\int_0^3 \log(x)\,dx=3\log(3)-3$. </p> <p>Using $(1)$, we have</p> <p>$$\begin{align} \lim_{n\to \infty}\sqrt[n]{\frac{(3n)!}{(5n)^{3n}}}&amp;=\lim_{n\to \infty}e^{\frac1n \log((3n)!)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)-3\log(5)}\\\\ &amp;= e^{3\log(3)-3-3\log(5)}\\\\ &amp;=\left(\frac{3}{5e}\right)^3 \end{align}$$</p> <p>as expected!</p> <blockquote> <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p> </blockquote> Answer 11: <p>$$ (3(n+1))! \neq 3 \cdot 2 \cdot 3 \cdot 3 \cdot 3 \cdot 4 \cdots 3 \cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \cdot 2) \cdot (3 \cdot 3) \cdot (3 \cdot 4) \cdots (3 \cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \begin{align*} (3(n+1))! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \cdot (3n+1) \cdot (3n+2) \cdot 3(n+1) \\ (3n)! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \end{align*} which clearly have the quotient Mathematica gave you. </p> Answer 12: <p>My favorite elementary inequality for $n!$: $ \def\lfrac#1#2{{\large\frac{#1}{#2}}} $ $(n/e)^n &lt; n! 
&lt; (n/e)^{n+1} $.</p> <p>Taking n-th roots, $n/e &lt; (n!)^{1/n} &lt; (n/e)^{1+1/n} $.</p> <p>Therefore $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &gt;\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}}{(5n)^{3}} =\lfrac{27n^3}{125e^3n^{3}} =\lfrac{27}{125e^3} $.</p> <p>Arguing the other way, $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &lt;\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}} =\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}} =\lfrac{27}{125e^3}(3n/e)^{1/n} $. Since, as $n \to \infty$, $n^{1/n} \to 1 $ and $a^{1/n} \to 1 $ for any $a &gt; 0$, $(3n/e)^{1/n} \to 1 $ so the limit is $\lfrac{27}{125e^3} $.</p> <hr> <p>A more general result:</p> <p>$\sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\lfrac{((an)!)^{1/n}}{(bn)^a} \gt \lfrac{((an/e)^{an})^{1/n}}{(bn)^a} =\lfrac{(an/e)^a}{(bn)^a} =\lfrac{a^a n^a}{b^ae^a} =\lfrac{a^a}{b^ae^a} =\left(\lfrac{a}{be}\right)^a $.</p> <p>The other way goes exactly as above, so that $\lim_{n \to \infty} \sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\left(\lfrac{a}{be}\right)^a $.</p> Answer 13: <p>By Stirling's inequality the answer is clearly $\left(\frac{3}{5e}\right)^3$. To prove it, you may notice that by setting $$ a_n = \frac{(3n)!}{(5n)^{3n}} $$ you have: $$ \frac{a_{n+1}}{a_n} = \frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \frac{\frac{3n+3}{5n+5}\cdot\frac{3n+2}{5n+5}\cdot\frac{3n+1}{5n+5}}{\left(1+\frac{1}{n}\right)^{3n}}\to\frac{\left(\frac{3}{5}\right)^3}{e^3}$$ as $n\to +\infty$.</p> Answer 14: <blockquote> <p>I thought it might be useful to present an approach that relies on elementary tools only. 
To that end, we proceed.</p> </blockquote> <p>First, we write </p> <p>$$\begin{align} \frac{1}{n}\log((3n!))&amp;=\frac1n\sum_{k=1}^{3n}\log(k)\\\\ &amp;=\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n) \tag 1 \end{align}$$</p> <p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\int_0^3 \log(x)\,dx=3\log(3)-3$. </p> <p>Using $(1)$, we have</p> <p>$$\begin{align} \lim_{n\to \infty}\sqrt[n]{\frac{(3n)!}{(5n)^{3n}}}&amp;=\lim_{n\to \infty}e^{\frac1n \log((3n)!)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)-3\log(5)}\\\\ &amp;= e^{3\log(3)-3-3\log(5)}\\\\ &amp;=\left(\frac{3}{5e}\right)^3 \end{align}$$</p> <p>as expected!</p> <blockquote> <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p> </blockquote> Answer 15: <p>$$ (3(n+1))! \neq 3 \cdot 2 \cdot 3 \cdot 3 \cdot 3 \cdot 4 \cdots 3 \cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \cdot 2) \cdot (3 \cdot 3) \cdot (3 \cdot 4) \cdots (3 \cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \begin{align*} (3(n+1))! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \cdot (3n+1) \cdot (3n+2) \cdot 3(n+1) \\ (3n)! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \end{align*} which clearly have the quotient Mathematica gave you. </p> Answer 16: <p>My favorite elementary inequality for $n!$: $ \def\lfrac#1#2{{\large\frac{#1}{#2}}} $ $(n/e)^n &lt; n! 
&lt; (n/e)^{n+1} $.</p> <p>Taking n-th roots, $n/e &lt; (n!)^{1/n} &lt; (n/e)^{1+1/n} $.</p> <p>Therefore $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &gt;\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}}{(5n)^{3}} =\lfrac{27n^3}{125e^3n^{3}} =\lfrac{27}{125e^3} $.</p> <p>Arguing the other way, $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &lt;\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}} =\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}} =\lfrac{27}{125e^3}(3n/e)^{1/n} $. Since, as $n \to \infty$, $n^{1/n} \to 1 $ and $a^{1/n} \to 1 $ for any $a &gt; 0$, $(3n/e)^{1/n} \to 1 $ so the limit is $\lfrac{27}{125e^3} $.</p> <hr> <p>A more general result:</p> <p>$\sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\lfrac{((an)!)^{1/n}}{(bn)^a} \gt \lfrac{((an/e)^{an})^{1/n}}{(bn)^a} =\lfrac{(an/e)^a}{(bn)^a} =\lfrac{a^a n^a}{b^ae^a} =\lfrac{a^a}{b^ae^a} =\left(\lfrac{a}{be}\right)^a $.</p> <p>The other way goes exactly as above, so that $\lim_{n \to \infty} \sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\left(\lfrac{a}{be}\right)^a $.</p> Answer 17: <p>By Stirling's inequality the answer is clearly $\left(\frac{3}{5e}\right)^3$. To prove it, you may notice that by setting $$ a_n = \frac{(3n)!}{(5n)^{3n}} $$ you have: $$ \frac{a_{n+1}}{a_n} = \frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \frac{\frac{3n+3}{5n+5}\cdot\frac{3n+2}{5n+5}\cdot\frac{3n+1}{5n+5}}{\left(1+\frac{1}{n}\right)^{3n}}\to\frac{\left(\frac{3}{5}\right)^3}{e^3}$$ as $n\to +\infty$.</p> Answer 18: <blockquote> <p>I thought it might be useful to present an approach that relies on elementary tools only. 
To that end, we proceed.</p> </blockquote> <p>First, we write </p> <p>$$\begin{align} \frac{1}{n}\log((3n!))&amp;=\frac1n\sum_{k=1}^{3n}\log(k)\\\\ &amp;=\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n) \tag 1 \end{align}$$</p> <p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\int_0^3 \log(x)\,dx=3\log(3)-3$. </p> <p>Using $(1)$, we have</p> <p>$$\begin{align} \lim_{n\to \infty}\sqrt[n]{\frac{(3n)!}{(5n)^{3n}}}&amp;=\lim_{n\to \infty}e^{\frac1n \log((3n)!)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)-3\log(5)}\\\\ &amp;= e^{3\log(3)-3-3\log(5)}\\\\ &amp;=\left(\frac{3}{5e}\right)^3 \end{align}$$</p> <p>as expected!</p> <blockquote> <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p> </blockquote> Answer 19: <p>$$ (3(n+1))! \neq 3 \cdot 2 \cdot 3 \cdot 3 \cdot 3 \cdot 4 \cdots 3 \cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \cdot 2) \cdot (3 \cdot 3) \cdot (3 \cdot 4) \cdots (3 \cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \begin{align*} (3(n+1))! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \cdot (3n+1) \cdot (3n+2) \cdot 3(n+1) \\ (3n)! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \end{align*} which clearly have the quotient Mathematica gave you. </p> Answer 20: <p>My favorite elementary inequality for $n!$: $ \def\lfrac#1#2{{\large\frac{#1}{#2}}} $ $(n/e)^n &lt; n! 
&lt; (n/e)^{n+1} $.</p> <p>Taking n-th roots, $n/e &lt; (n!)^{1/n} &lt; (n/e)^{1+1/n} $.</p> <p>Therefore $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &gt;\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}}{(5n)^{3}} =\lfrac{27n^3}{125e^3n^{3}} =\lfrac{27}{125e^3} $.</p> <p>Arguing the other way, $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &lt;\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}} =\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}} =\lfrac{27}{125e^3}(3n/e)^{1/n} $. Since, as $n \to \infty$, $n^{1/n} \to 1 $ and $a^{1/n} \to 1 $ for any $a &gt; 0$, $(3n/e)^{1/n} \to 1 $ so the limit is $\lfrac{27}{125e^3} $.</p> <hr> <p>A more general result:</p> <p>$\sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\lfrac{((an)!)^{1/n}}{(bn)^a} \gt \lfrac{((an/e)^{an})^{1/n}}{(bn)^a} =\lfrac{(an/e)^a}{(bn)^a} =\lfrac{a^a n^a}{b^ae^a} =\lfrac{a^a}{b^ae^a} =\left(\lfrac{a}{be}\right)^a $.</p> <p>The other way goes exactly as above, so that $\lim_{n \to \infty} \sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\left(\lfrac{a}{be}\right)^a $.</p> Answer 21: <p>By Stirling's inequality the answer is clearly $\left(\frac{3}{5e}\right)^3$. To prove it, you may notice that by setting $$ a_n = \frac{(3n)!}{(5n)^{3n}} $$ you have: $$ \frac{a_{n+1}}{a_n} = \frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \frac{\frac{3n+3}{5n+5}\cdot\frac{3n+2}{5n+5}\cdot\frac{3n+1}{5n+5}}{\left(1+\frac{1}{n}\right)^{3n}}\to\frac{\left(\frac{3}{5}\right)^3}{e^3}$$ as $n\to +\infty$.</p> Answer 22: <blockquote> <p>I thought it might be useful to present an approach that relies on elementary tools only. 
To that end, we proceed.</p> </blockquote> <p>First, we write </p> <p>$$\begin{align} \frac{1}{n}\log((3n!))&amp;=\frac1n\sum_{k=1}^{3n}\log(k)\\\\ &amp;=\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n) \tag 1 \end{align}$$</p> <p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\int_0^3 \log(x)\,dx=3\log(3)-3$. </p> <p>Using $(1)$, we have</p> <p>$$\begin{align} \lim_{n\to \infty}\sqrt[n]{\frac{(3n)!}{(5n)^{3n}}}&amp;=\lim_{n\to \infty}e^{\frac1n \log((3n)!)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)-3\log(5)}\\\\ &amp;= e^{3\log(3)-3-3\log(5)}\\\\ &amp;=\left(\frac{3}{5e}\right)^3 \end{align}$$</p> <p>as expected!</p> <blockquote> <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p> </blockquote> Answer 23: <p>$$ (3(n+1))! \neq 3 \cdot 2 \cdot 3 \cdot 3 \cdot 3 \cdot 4 \cdots 3 \cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \cdot 2) \cdot (3 \cdot 3) \cdot (3 \cdot 4) \cdots (3 \cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \begin{align*} (3(n+1))! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \cdot (3n+1) \cdot (3n+2) \cdot 3(n+1) \\ (3n)! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \end{align*} which clearly have the quotient Mathematica gave you. </p> Answer 24: <p>My favorite elementary inequality for $n!$: $ \def\lfrac#1#2{{\large\frac{#1}{#2}}} $ $(n/e)^n &lt; n! 
&lt; (n/e)^{n+1} $.</p> <p>Taking n-th roots, $n/e &lt; (n!)^{1/n} &lt; (n/e)^{1+1/n} $.</p> <p>Therefore $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &gt;\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}}{(5n)^{3}} =\lfrac{27n^3}{125e^3n^{3}} =\lfrac{27}{125e^3} $.</p> <p>Arguing the other way, $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &lt;\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}} =\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}} =\lfrac{27}{125e^3}(3n/e)^{1/n} $. Since, as $n \to \infty$, $n^{1/n} \to 1 $ and $a^{1/n} \to 1 $ for any $a &gt; 0$, $(3n/e)^{1/n} \to 1 $ so the limit is $\lfrac{27}{125e^3} $.</p> <hr> <p>A more general result:</p> <p>$\sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\lfrac{((an)!)^{1/n}}{(bn)^a} \gt \lfrac{((an/e)^{an})^{1/n}}{(bn)^a} =\lfrac{(an/e)^a}{(bn)^a} =\lfrac{a^a n^a}{b^ae^a} =\lfrac{a^a}{b^ae^a} =\left(\lfrac{a}{be}\right)^a $.</p> <p>The other way goes exactly as above, so that $\lim_{n \to \infty} \sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\left(\lfrac{a}{be}\right)^a $.</p> Answer 25: <p>By Stirling's inequality the answer is clearly $\left(\frac{3}{5e}\right)^3$. To prove it, you may notice that by setting $$ a_n = \frac{(3n)!}{(5n)^{3n}} $$ you have: $$ \frac{a_{n+1}}{a_n} = \frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \frac{\frac{3n+3}{5n+5}\cdot\frac{3n+2}{5n+5}\cdot\frac{3n+1}{5n+5}}{\left(1+\frac{1}{n}\right)^{3n}}\to\frac{\left(\frac{3}{5}\right)^3}{e^3}$$ as $n\to +\infty$.</p> Answer 26: <blockquote> <p>I thought it might be useful to present an approach that relies on elementary tools only. 
To that end, we proceed.</p> </blockquote> <p>First, we write </p> <p>$$\begin{align} \frac{1}{n}\log((3n!))&amp;=\frac1n\sum_{k=1}^{3n}\log(k)\\\\ &amp;=\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n) \tag 1 \end{align}$$</p> <p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\int_0^3 \log(x)\,dx=3\log(3)-3$. </p> <p>Using $(1)$, we have</p> <p>$$\begin{align} \lim_{n\to \infty}\sqrt[n]{\frac{(3n)!}{(5n)^{3n}}}&amp;=\lim_{n\to \infty}e^{\frac1n \log((3n)!)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)-3\log(5)}\\\\ &amp;= e^{3\log(3)-3-3\log(5)}\\\\ &amp;=\left(\frac{3}{5e}\right)^3 \end{align}$$</p> <p>as expected!</p> <blockquote> <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p> </blockquote> Answer 27: <p>$$ (3(n+1))! \neq 3 \cdot 2 \cdot 3 \cdot 3 \cdot 3 \cdot 4 \cdots 3 \cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \cdot 2) \cdot (3 \cdot 3) \cdot (3 \cdot 4) \cdots (3 \cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \begin{align*} (3(n+1))! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \cdot (3n+1) \cdot (3n+2) \cdot 3(n+1) \\ (3n)! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \end{align*} which clearly have the quotient Mathematica gave you. </p> Answer 28: <p>My favorite elementary inequality for $n!$: $ \def\lfrac#1#2{{\large\frac{#1}{#2}}} $ $(n/e)^n &lt; n! 
&lt; (n/e)^{n+1} $.</p> <p>Taking n-th roots, $n/e &lt; (n!)^{1/n} &lt; (n/e)^{1+1/n} $.</p> <p>Therefore $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &gt;\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}}{(5n)^{3}} =\lfrac{27n^3}{125e^3n^{3}} =\lfrac{27}{125e^3} $.</p> <p>Arguing the other way, $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &lt;\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}} =\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}} =\lfrac{27}{125e^3}(3n/e)^{1/n} $. Since, as $n \to \infty$, $n^{1/n} \to 1 $ and $a^{1/n} \to 1 $ for any $a &gt; 0$, $(3n/e)^{1/n} \to 1 $ so the limit is $\lfrac{27}{125e^3} $.</p> <hr> <p>A more general result:</p> <p>$\sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\lfrac{((an)!)^{1/n}}{(bn)^a} \gt \lfrac{((an/e)^{an})^{1/n}}{(bn)^a} =\lfrac{(an/e)^a}{(bn)^a} =\lfrac{a^a n^a}{b^ae^a} =\lfrac{a^a}{b^ae^a} =\left(\lfrac{a}{be}\right)^a $.</p> <p>The other way goes exactly as above, so that $\lim_{n \to \infty} \sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\left(\lfrac{a}{be}\right)^a $.</p> Answer 29: <p>By Stirling's inequality the answer is clearly $\left(\frac{3}{5e}\right)^3$. To prove it, you may notice that by setting $$ a_n = \frac{(3n)!}{(5n)^{3n}} $$ you have: $$ \frac{a_{n+1}}{a_n} = \frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \frac{\frac{3n+3}{5n+5}\cdot\frac{3n+2}{5n+5}\cdot\frac{3n+1}{5n+5}}{\left(1+\frac{1}{n}\right)^{3n}}\to\frac{\left(\frac{3}{5}\right)^3}{e^3}$$ as $n\to +\infty$.</p> Answer 30: <blockquote> <p>I thought it might be useful to present an approach that relies on elementary tools only. 
To that end, we proceed.</p> </blockquote> <p>First, we write </p> <p>$$\begin{align} \frac{1}{n}\log((3n!))&amp;=\frac1n\sum_{k=1}^{3n}\log(k)\\\\ &amp;=\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n) \tag 1 \end{align}$$</p> <p>Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\int_0^3 \log(x)\,dx=3\log(3)-3$. </p> <p>Using $(1)$, we have</p> <p>$$\begin{align} \lim_{n\to \infty}\sqrt[n]{\frac{(3n)!}{(5n)^{3n}}}&amp;=\lim_{n\to \infty}e^{\frac1n \log((3n)!)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n)-3\log(5n)}\\\\ &amp;=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)-3\log(5)}\\\\ &amp;= e^{3\log(3)-3-3\log(5)}\\\\ &amp;=\left(\frac{3}{5e}\right)^3 \end{align}$$</p> <p>as expected!</p> <blockquote> <p><strong>Tools used: Riemann sums and the continuity of the exponential function.</strong></p> </blockquote> Answer 31: <p>$$ (3(n+1))! \neq 3 \cdot 2 \cdot 3 \cdot 3 \cdot 3 \cdot 4 \cdots 3 \cdot (n+1) $$ To make it clear what the problem is, let's write the right-hand side with brackets: $$ (3 \cdot 2) \cdot (3 \cdot 3) \cdot (3 \cdot 4) \cdots (3 \cdot (n+1)) $$ That's just multiplying all the positive multiples of 3 less than $ 3(n+1) $ together; the factorial is defined as multiplying together <em>all</em> positive integers less than $ 3(n+1) $. So the correct expansions are \begin{align*} (3(n+1))! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \cdot (3n+1) \cdot (3n+2) \cdot 3(n+1) \\ (3n)! &amp;= 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6 \cdots (3n-2) \cdot (3n-1) \cdot 3n \end{align*} which clearly have the quotient Mathematica gave you. </p> Answer 32: <p>My favorite elementary inequality for $n!$: $ \def\lfrac#1#2{{\large\frac{#1}{#2}}} $ $(n/e)^n &lt; n! 
&lt; (n/e)^{n+1} $.</p> <p>Taking n-th roots, $n/e &lt; (n!)^{1/n} &lt; (n/e)^{1+1/n} $.</p> <p>Therefore $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &gt;\lfrac{((3n/e)^{3n})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}}{(5n)^{3}} =\lfrac{27n^3}{125e^3n^{3}} =\lfrac{27}{125e^3} $.</p> <p>Arguing the other way, $\sqrt[n]{\lfrac{(3n)!}{(5n)^{3n}}} =\lfrac{((3n)!)^{1/n}}{(5n)^{3}} &lt;\lfrac{((3n/e)^{3n+1})^{1/n}}{(5n)^{3}} =\lfrac{(3n/e)^{3}(3n/e)^{1/n}}{(5n)^{3}} =\lfrac{27n^3(3n/e)^{1/n}}{125e^3n^{3}} =\lfrac{27}{125e^3}(3n/e)^{1/n} $. Since, as $n \to \infty$, $n^{1/n} \to 1 $ and $a^{1/n} \to 1 $ for any $a &gt; 0$, $(3n/e)^{1/n} \to 1 $ so the limit is $\lfrac{27}{125e^3} $.</p> <hr> <p>A more general result:</p> <p>$\sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\lfrac{((an)!)^{1/n}}{(bn)^a} \gt \lfrac{((an/e)^{an})^{1/n}}{(bn)^a} =\lfrac{(an/e)^a}{(bn)^a} =\lfrac{a^a n^a}{b^ae^an^{a}} =\lfrac{a^a}{b^ae^a} =\left(\lfrac{a}{be}\right)^a $.</p> <p>The other way goes exactly as above, so that $\lim_{n \to \infty} \sqrt[n]{\lfrac{(an)!}{(bn)^{an}}} =\left(\lfrac{a}{be}\right)^a $.</p> Answer 33: <p>By Stirling's inequality the answer is clearly $\left(\frac{3}{5e}\right)^3$. To prove it, you may notice that by setting $$ a_n = \frac{(3n)!}{(5n)^{3n}} $$ you have: $$ \frac{a_{n+1}}{a_n} = \frac{(3n+3)(3n+2)(3n+1)(5n)^{3n}}{(5n+5)^{3n+3}} = \frac{\frac{3n+3}{5n+5}\cdot\frac{3n+2}{5n+5}\cdot\frac{3n+1}{5n+5}}{\left(1+\frac{1}{n}\right)^{3n}}\to\frac{\left(\frac{3}{5}\right)^3}{e^3}$$ as $n\to +\infty$.</p>
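All of the distinct answers above land on $\left(\frac{3}{5e}\right)^3$. As an illustrative numerical sanity check (my own addition, not part of the original thread), the $n$-th root can be evaluated in log space with `math.lgamma`, which avoids forming the huge factorial directly:

```python
import math

def nth_root_term(n):
    # ((3n)! / (5n)^(3n))^(1/n), computed in log space:
    # log((3n)!) = lgamma(3n + 1), so the log of the n-th root is
    # lgamma(3n + 1)/n - 3*log(5n).
    log_val = math.lgamma(3 * n + 1) / n - 3 * math.log(5 * n)
    return math.exp(log_val)

limit = (3 / (5 * math.e)) ** 3  # about 0.01075

for n in (10, 100, 1000):
    print(n, nth_root_term(n))
```

By Stirling, each term exceeds the limit and converges to it slowly (the correction factor behaves like $(6\pi n)^{1/(2n)}$), which the printed values reflect.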
Is this a valid way to prove, that $a^2 + b^2 \neq 3c^2$ for all integers $a, b, c$ ? (except the trivial case)
2,109,197
6
https://math.stackexchange.com/questions/2109197/is-this-a-valid-way-to-prove-that-a2-b2-neq-3c2-for-all-integers-a-b
<p><strong>Update:</strong><br> (because of the length of the question, I put an update at the top)<br> I appreciate recommendations regarding the alternative proofs. However, the main emphasis of my question is about the correctness of the reasoning in the 8th case of the provided proof (with a diagram).</p> <p><strong>Original question:</strong> </p> <p>I would like to know whether the following proof is a valid way to prove that $a^2 + b^2 \neq 3c^2$ for all $a, b, c \in Z$ (except the trivial case, when $a=b=c=0$). More formally, we have to prove the correctness of the following statement:</p> <p>$$P: (\forall a,b,c \in Z, a^2 + b^2 \neq 3c^2 \lor (a=b=c=0))$$</p> <p><strong>Proof.</strong> <em>(by contradiction)</em><br> For the sake of contradiction, let's assume that there exist $a, b, c \in Z$ such that $a^2 + b^2 = 3c^2$ (and the combination of $a,b,c$ is not a trivial case). More formally, let's assume that $\neg P$ is <em>true</em>:</p> <p>$$\neg P: (\exists a,b,c \in Z, a^2 + b^2 = 3c^2 \land \neg (a=b=c=0))$$</p> <p>There are $2^3$ possible combinations of different parities of $a,b,c$ (8 disjoint cases, which cover the entire $Z^3$).
So, in order to prove the original statement, we have to consider each case and show that the <em>true</em>-ness of $\neg P$ always leads to some sort of contradiction.</p> <p>Let's consider 8 possible cases (7 of which are simple, <strong>whereas the 8th case looks a bit intricate, and I am not sure regarding its correctness</strong>):</p> <p><strong>Case 1)</strong> $a$ is odd, $b$ is odd, $c$ is odd<br> Thus:<br> $a = (2x + 1)$, $b = (2y + 1)$, $c = (2z + 1)$ for some $x, y, z \in Z$<br> So:<br> $$ a^2 + b^2 = 3c^2 \\ \implies (2x + 1) ^2 + (2y + 1)^2 = 3 \cdot (2z + 1)^2 \\ \implies 2 \cdot (2x^2 + 2x + 2y^2 + 2y + 1) = 2 \cdot (6z^2 + 6z + 1) + 1 \\ \implies even\ number = odd\ number \\ $$</p> <p>However, the derived result contradicts the fact that <em>odd numbers</em> and <em>even numbers</em> can't be equal. Hence: $(even\ number = odd\ number) \land (even\ number \neq odd\ number)$, or equivalently: $(even\ number = odd\ number) \land \neg (even\ number = odd\ number)$.
Contradiction.</p> <p><strong>Case 2)</strong> $a$ is odd, $b$ is odd, $c$ is even<br> Thus:<br> $a = (2x + 1)$, $b = (2y + 1)$, $c = 2z$ for some $x, y, z \in Z$<br> So:<br> $$ a^2 + b^2 = 3c^2 \\ \implies (2x + 1) ^2 + (2y + 1)^2 = 3 \cdot (2z)^2 \\ \implies 2 \cdot (2x^2 + 2x + 2y^2 + 2y + 1) = 12z^2 \\ \implies 2 \cdot (x^2 + x + y^2 + y) + 1 = 6z^2 \\ \implies odd\ number = even\ number $$ Contradiction.</p> <p><strong>Case 3)</strong> $a$ is odd, $b$ is even, $c$ is odd<br> Thus:<br> $a = (2x + 1)$, $b = 2y$, $c = (2z + 1)$ for some $x, y, z \in Z$<br> So:<br> $$ a^2 + b^2 = 3c^2 \\ \implies (2x + 1) ^2 + (2y)^2 = 3 \cdot (2z + 1)^2 \\ \implies 4x^2 + 4x + 1 + 4y^2 = 12z^2 + 12z + 3 \\ \implies 4\cdot(x^2 + x + y^2) = 2 \cdot (6z^2 + 6z + 1) \\ \implies 2\cdot(x^2 + x + y^2) = 6z^2 + 6z + 1 \\ \implies even\ number = odd\ number $$ Contradiction.</p> <p><strong>Case 4)</strong> $a$ is odd, $b$ is even, $c$ is even<br> The square of an odd number is odd (so, $a^2$ is odd).<br> The square of an even number is even (so, $b^2$ and $3c^2$ are even).<br> Fact: <em>the sum of an even number and an odd number is odd</em>.<br> However, equality: $a^2 + b^2 = 3c^2$ leads to the conclusion, that: $odd\ number + even\ number = even\ number$<br> Contradiction.</p> <p><strong>Case 5)</strong> $a$ is even, $b$ is odd, $c$ is odd<br> Symmetric to the Case 3 (because $a$ and $b$ are mutually exchangeable), which shows the contradiction.</p> <p><strong>Case 6)</strong> $a$ is even, $b$ is odd, $c$ is even<br> Symmetric to the Case 4, which shows the contradiction.</p> <p><strong>Case 7)</strong> $a$ is even, $b$ is even, $c$ is odd<br> Thus:<br> $a = 2x$, $b = 2y$, $c = (2z + 1)$ for some $x, y, z \in Z$<br> So:<br> $$ a^2 + b^2 = 3c^2 \\ \implies 4x^2 + 4y^2 = 12z^2 + 12z + 3 \\ \implies even\ number = odd\ number $$ Contradiction.</p> <p><strong>Case 8)</strong> $a$ is even, $b$ is even, $c$ is even<br> Thus:<br> $a = 2x$, $b = 2y$, $c = 2z$ for some $x, y, z \in Z$<br> So: 
$$ a^2 + b^2 = 3c^2 \\ \implies 4x^2 + 4y^2 = 3 \cdot 4z^2 \\ \implies x^2 + y^2 = 3z^2 $$</p> <p>Now, we are faced with a similar instance of the problem, <strong>however, the size of the problem is strictly smaller</strong> ($x = {a \over 2}$, $y = {b \over 2}$, $z = {c \over 2}$).<br> At first glance, it seems that we have to consider again the eight possible parities of $x, y, z$. However, if we analyze all dependencies between the cases of the problem, we will notice that the only possible outcomes are either <em>contradiction</em> or <em>the trivial case</em>:</p> <p><a href="https://i.stack.imgur.com/KVWUL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KVWUL.png" alt="enter image description here"></a></p> <p>We have shown the contradiction in all cases, hence we have proved the original statement. $\blacksquare$</p> <p>So, I would like to know if there is any problem with the reasoning in the 8th case?</p>
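Independently of the proof's logic, the claim itself can be spot-checked by brute force (this snippet is my own illustration, not part of the question): searching a bounded box of integer triples should find only the trivial solution.

```python
def solutions(N):
    # All triples (a, b, c) with |a|, |b|, |c| <= N and a^2 + b^2 = 3c^2.
    return [(a, b, c)
            for a in range(-N, N + 1)
            for b in range(-N, N + 1)
            for c in range(-N, N + 1)
            if a * a + b * b == 3 * c * c]

print(solutions(12))  # only the trivial triple (0, 0, 0)
```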
[ { "answer_id": 2109206, "score": 2, "text": "<p>Suppose a solution with at least one of $a,b\\neq 0$ exists. If $(a,b,c)$ is a solution then $(\\frac{a}{\\gcd(a,b)},\\frac{b}{\\gcd(a,b)},\\frac{c}{\\gcd(a,b)})$ is also a solution (and all terms are integers), and $a$ and $b$ are coprime.</p>\n\n<p>Now notice that $a^2+b^2$ cannot be a multiple of $3$ unless both are multiples of $3$ (because $a^2\\equiv 0$ or $1\\bmod 3$). We conclude that if $a^2+b^2=3c^2$ we have $a=b=0$.</p>\n" }, { "answer_id": 2119548, "score": 2, "text": "<p>There is truth in your method for case 8. It is called <a href=\"https://en.wikipedia.org/wiki/Proof_by_infinite_descent\" rel=\"nofollow noreferrer\">infinite descent</a> and equivalent to induction (i.e., alternatively you might start out by assuming that $(a,b,c)$ is the smallest non-trivial solution, and then $(a/2,b/2,c/2)$ cannot be a solution).</p>\n" } ]
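The parity analysis of cases 1–7 in the question can be compressed into a single residue check (my own illustration, not from either answer): working modulo $4$, $a^2+b^2 \equiv 3c^2 \pmod 4$ already forces $a$, $b$, $c$ all even, which is exactly the step that drives the descent in case 8.

```python
# Enumerate residues mod 4 and collect the parities (a mod 2, b mod 2, c mod 2)
# compatible with a^2 + b^2 == 3c^2 (mod 4).  Squares mod 4 lie in {0, 1},
# so a^2 + b^2 mod 4 is in {0, 1, 2} while 3c^2 mod 4 is in {0, 3};
# the two sides can only agree at 0, i.e. when a, b, c are all even.
compatible = {(a % 2, b % 2, c % 2)
              for a in range(4) for b in range(4) for c in range(4)
              if (a * a + b * b) % 4 == (3 * c * c) % 4}
print(compatible)  # {(0, 0, 0)}
```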
Question: Is this a valid way to prove, that $a^2 + b^2 \neq 3c^2$ for all integers $a, b, c$ ? (except the trivial case) Answer 1: <p>Suppose a solution with at least one of $a,b\neq 0$ exists. If $(a,b,c)$ is a solution then $(\frac{a}{\gcd(a,b)},\frac{b}{\gcd(a,b)},\frac{c}{\gcd(a,b)})$ is also a solution (and all terms are integers), and $a$ and $b$ are coprime.</p> <p>Now notice that $a^2+b^2$ cannot be a multiple of $3$ unless both are multiples of $3$ (because $a^2\equiv 0$ or $1\bmod 3$). We conclude that if $a^2+b^2=3c^2$ we have $a=b=0$.</p> Answer 2: <p>There is truth in your method for case 8. It is called <a href="https://en.wikipedia.org/wiki/Proof_by_infinite_descent" rel="nofollow noreferrer">infinite descent</a> and equivalent to induction (i.e., alternatively you might start out by assuming that $(a,b,c)$ is the smallest non-trivial solution, and then $(a/2,b/2,c/2)$ cannot be a solution).</p>
If $f$ is continuous and $f(f(x))+f(x)+x=0$ for all $x$ in $\mathbb R^2$, then $f$ is linear?
2,109,245
6
https://math.stackexchange.com/questions/2109245/if-f-is-continuous-and-ffxfxx-0-for-all-x-in-mathbb-r2-then
<p>Let $f\in C^0(\mathbb R^2,\mathbb R^2)$, with $f(f(x))+f(x)+x=0$ for all $x$ in $\mathbb R^2$.</p> <p>Is $f$ linear (i.e. $\forall (x,y) \in (\mathbb R^2)^2,\ f(x+y)=f(x)+f(y)$)?</p> <p><em>This problem can be translated to a problem in the geometry of the plane.</em></p>
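A natural first observation (my own, not from the question) is that a linear solution does exist: rotation by $120^\circ$ satisfies the identity, because its matrix $R$ obeys $R^2 + R + I = 0$ (it is a cube root of the identity). A small numerical check:

```python
import math

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R = rot(2 * math.pi / 3)  # rotation by 120 degrees
R2 = matmul(R, R)
# R^2 + R + I should vanish (up to floating-point noise).
S = [[R2[i][j] + R[i][j] + (1.0 if i == j else 0.0) for j in range(2)]
     for i in range(2)]
print(S)
```

The question then amounts to whether every continuous solution must be this rigid; the answers show it need not be.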
[ { "answer_id": 2125524, "score": 7, "text": "<p>No, such an $f$ need not be linear. Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p>\n\n<p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\\mathbb{R}^2$, since it holds on each scaled loop.</p>\n\n<hr>\n\n<p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). 
Explicitly, for $0\\leq t\\leq 1$, we define the following values of $f$:\n$$f((1-t)p+tq)=(1-t^2)q+t^2r$$ \n$$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$\n$$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t)p+tq$$</p>\n\n<p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p>\n\n<p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\\arctan\\left(\\frac{2t^2-t-1}{t^2+t-1}\\right)$$ (for an appropriate branch of $\\arctan$) and show $g'(t)&gt;0$ for $0\\leq t\\leq 1$.
This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\\leq t\\leq 1$ (see <a href=\"http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))\" rel=\"noreferrer\">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p>\n\n<p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\\left(\\frac{p+q}{2}\\right)=\\frac{3q+r}{4}\\neq \\frac{q+r}{2}=\\frac{f(p)+f(q)}{2}.$$</p>\n" }, { "answer_id": 2126034, "score": 5, "text": "<p>The computational part of Eric Wofsey's answer can be simplified a little bit; in particular\nit can be done entirely by hand as I show below.\nFollowing this answer, let us start with the three points $p=(1,0),q=(0,1),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\\to [0,1]$, to be defined later, and let</p>\n\n<p>$$\n\\begin{array}{lclcl}\nA(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (1-t,t) \\\\\nB(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (-b(t),1-2b(t)) \\\\\nC(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (t+b(t)-1,2b(t)-(t+1)).
\\\\\n\\end{array}\n$$\n Then there is a unique bijection $f$, defined on the triangle $pqr$, such that\n$f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\\in [0,1]$.</p>\n\n<p>As shown in Eric's answer, the only problem is then to find a $b$ such\nthat f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically\nas $t$ goes from $0$ to $1$.</p>\n\n<p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically\npermutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p>\n\n<p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect\nat $D(t)=(1-d(t))r+d(t)p$ where</p>\n\n<p>$$\nd(t)=\\frac{2t-b(t)}{3t-3b(t)+1}\n$$</p>\n\n<p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on\n$(0,1)$. Another little computation shows that the numerator of $d'$ is\n(here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p>\n\n<p>$$\nN(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t)\n$$</p>\n\n<p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\\frac{1}{3})^2+\\frac{5}{3}$\nis never zero.</p>\n" }, { "answer_id": 2125524, "score": 7, "text": "<p>No, such an $f$ need not be linear. Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. 
Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p>\n\n<p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\\mathbb{R}^2$, since it holds on each scaled loop.</p>\n\n<hr>\n\n<p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). Explicitly, for $0\\leq t\\leq 1$, we define the following values of $f$:\n$$f((1-t)p+tq)=(1-t^2)q+t^2r$$ \n$$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$\n$$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p>\n\n<p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. 
Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p>\n\n<p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\\arctan\\left(\\frac{2t^2-t-1}{t^2+t-1}\\right)$$ (for an appropriate branch of $\\arctan$) and show $g'(t)&gt;0$ for $0\\leq t\\leq 1$. This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\\leq t\\leq 1$ (see <a href=\"http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))\" rel=\"noreferrer\">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p>\n\n<p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. 
Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\\left(\\frac{p+q}{2}\\right)=\\frac{3q+r}{4}\\neq \\frac{q+r}{2}=\\frac{f(p)+f(q)}{2}.$$</p>\n" }, { "answer_id": 2126034, "score": 5, "text": "<p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular\nit can be done entirely by hand as I show below.\nFollowing this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\\to [0,1]$, to be defined later, and let</p>\n\n<p>$$\n\\begin{array}{lclcl}\nA(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\\\\nB(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\\\\nC(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). \\\\\n\\end{array}\n$$\n Then there is a unique bijection $f$, defined on the triangle $pqr$, such that\n$f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\\in [0,1]$.</p>\n\n<p>As shown in Eric's answer, the only problem is then to find a $b$ such\nthat f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically\nas $t$ goes from $0$ to $1$.</p>\n\n<p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically\npermutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p>\n\n<p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect\nat $D(t)=(1-d(t))r+d(t)p$ where</p>\n\n<p>$$\nd(t)=\\frac{2t-b(t)}{3t-3b(t)+1}\n$$</p>\n\n<p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on\n$(0,1)$. Another little computation shows that the numerator of $d'$ is\n(here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p>\n\n<p>$$\nN(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t)\n$$</p>\n\n<p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\\frac{1}{3})^2+\\frac{5}{3}$\nis never zero.</p>\n" }, { "answer_id": 2125524, "score": 7, "text": "<p>No, such an $f$ need not be linear. 
Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p>\n\n<p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\\mathbb{R}^2$, since it holds on each scaled loop.</p>\n\n<hr>\n\n<p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). 
Explicitly, for $0\\leq t\\leq 1$, we define the following values of $f$:\n$$f((1-t)p+tq)=(1-t^2)q+t^2r$$ \n$$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$\n$$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p>\n\n<p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p>\n\n<p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\\arctan\\left(\\frac{2t^2-t-1}{t^2+t-1}\\right)$$ (for an appropriate branch of $\\arctan$) and show $g'(t)&gt;0$ for $0\\leq t\\leq 1$. 
This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\\leq t\\leq 1$ (see <a href=\"http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))\" rel=\"noreferrer\">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p>\n\n<p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\\left(\\frac{p+q}{2}\\right)=\\frac{3q+r}{4}\\neq \\frac{q+r}{2}=\\frac{f(p)+f(q)}{2}.$$</p>\n" }, { "answer_id": 2126034, "score": 5, "text": "<p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular\nit can be done entirely by hand as I show below.\nFollowing this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\\to [0,1]$, to be defined later, and let</p>\n\n<p>$$\n\\begin{array}{lclcl}\nA(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\\\\nB(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\\\\nC(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). 
\\\\\n\\end{array}\n$$\n Then there is a unique bijection $f$, defined on the triangle $pqr$, such that\n$f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\\in [0,1]$.</p>\n\n<p>As shown in Eric's answer, the only problem is then to find a $b$ such\nthat f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically\nas $t$ goes from $0$ to $1$.</p>\n\n<p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically\npermutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p>\n\n<p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect\nat $D(t)=(1-d(t))r+d(t)p$ where</p>\n\n<p>$$\nd(t)=\\frac{2t-b(t)}{3t-3b(t)+1}\n$$</p>\n\n<p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on\n$(0,1)$. Another little computation shows that the numerator of $d'$ is\n(here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p>\n\n<p>$$\nN(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t)\n$$</p>\n\n<p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\\frac{1}{3})^2+\\frac{5}{3}$\nis never zero.</p>\n" }, { "answer_id": 2125524, "score": 7, "text": "<p>No, such an $f$ need not be linear. Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. 
Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p>\n\n<p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\\mathbb{R}^2$, since it holds on each scaled loop.</p>\n\n<hr>\n\n<p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). Explicitly, for $0\\leq t\\leq 1$, we define the following values of $f$:\n$$f((1-t)p+tq)=(1-t^2)q+t^2r$$ \n$$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$\n$$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p>\n\n<p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. 
Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p>\n\n<p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\\arctan\\left(\\frac{2t^2-t-1}{t^2+t-1}\\right)$$ (for an appropriate branch of $\\arctan$) and show $g'(t)&gt;0$ for $0\\leq t\\leq 1$. This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\\leq t\\leq 1$ (see <a href=\"http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))\" rel=\"noreferrer\">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p>\n\n<p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. 
Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\\left(\\frac{p+q}{2}\\right)=\\frac{3q+r}{4}\\neq \\frac{q+r}{2}=\\frac{f(p)+f(q)}{2}.$$</p>\n" }, { "answer_id": 2126034, "score": 5, "text": "<p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular\nit can be done entirely by hand as I show below.\nFollowing this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\\to [0,1]$, to be defined later, and let</p>\n\n<p>$$\n\\begin{array}{lclcl}\nA(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\\\\nB(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\\\\nC(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). \\\\\n\\end{array}\n$$\n Then there is a unique bijection $f$, defined on the triangle $pqr$, such that\n$f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\\in [0,1]$.</p>\n\n<p>As shown in Eric's answer, the only problem is then to find a $b$ such\nthat f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically\nas $t$ goes from $0$ to $1$.</p>\n\n<p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically\npermutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p>\n\n<p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect\nat $D(t)=(1-d(t))r+d(t)p$ where</p>\n\n<p>$$\nd(t)=\\frac{2t-b(t)}{3t-3b(t)+1}\n$$</p>\n\n<p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on\n$(0,1)$. Another little computation shows that the numerator of $d'$ is\n(here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p>\n\n<p>$$\nN(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t)\n$$</p>\n\n<p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\\frac{1}{3})^2+\\frac{5}{3}$\nis never zero.</p>\n" }, { "answer_id": 2125524, "score": 7, "text": "<p>No, such an $f$ need not be linear. 
Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p>\n\n<p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\\mathbb{R}^2$, since it holds on each scaled loop.</p>\n\n<hr>\n\n<p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). 
Explicitly, for $0\\leq t\\leq 1$, we define the following values of $f$:\n$$f((1-t)p+tq)=(1-t^2)q+t^2r$$ \n$$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$\n$$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p>\n\n<p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p>\n\n<p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\\arctan\\left(\\frac{2t^2-t-1}{t^2+t-1}\\right)$$ (for an appropriate branch of $\\arctan$) and show $g'(t)&gt;0$ for $0\\leq t\\leq 1$. 
This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\\leq t\\leq 1$ (see <a href=\"http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))\" rel=\"noreferrer\">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p>\n\n<p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\\left(\\frac{p+q}{2}\\right)=\\frac{3q+r}{4}\\neq \\frac{q+r}{2}=\\frac{f(p)+f(q)}{2}.$$</p>\n" }, { "answer_id": 2126034, "score": 5, "text": "<p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular\nit can be done entirely by hand as I show below.\nFollowing this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\\to [0,1]$, to be defined later, and let</p>\n\n<p>$$\n\\begin{array}{lclcl}\nA(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\\\\nB(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\\\\nC(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). 
\\\\\n\\end{array}\n$$\n Then there is a unique bijection $f$, defined on the triangle $pqr$, such that\n$f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\\in [0,1]$.</p>\n\n<p>As shown in Eric's answer, the only problem is then to find a $b$ such\nthat f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically\nas $t$ goes from $0$ to $1$.</p>\n\n<p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically\npermutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p>\n\n<p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect\nat $D(t)=(1-d(t))r+d(t)p$ where</p>\n\n<p>$$\nd(t)=\\frac{2t-b(t)}{3t-3b(t)+1}\n$$</p>\n\n<p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on\n$(0,1)$. Another little computation shows that the numerator of $d'$ is\n(here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p>\n\n<p>$$\nN(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t)\n$$</p>\n\n<p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\\frac{1}{3})^2+\\frac{5}{3}$\nis never zero.</p>\n" }, { "answer_id": 2125524, "score": 7, "text": "<p>No, such an $f$ need not be linear. Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. 
Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p>\n\n<p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\\mathbb{R}^2$, since it holds on each scaled loop.</p>\n\n<hr>\n\n<p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). Explicitly, for $0\\leq t\\leq 1$, we define the following values of $f$:\n$$f((1-t)p+tq)=(1-t^2)q+t^2r$$ \n$$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$\n$$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p>\n\n<p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. 
Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p>\n\n<p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\\arctan\\left(\\frac{2t^2-t-1}{t^2+t-1}\\right)$$ (for an appropriate branch of $\\arctan$) and show $g'(t)&gt;0$ for $0\\leq t\\leq 1$. This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\\leq t\\leq 1$ (see <a href=\"http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))\" rel=\"noreferrer\">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p>\n\n<p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. 
Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\\left(\\frac{p+q}{2}\\right)=\\frac{3q+r}{4}\\neq \\frac{q+r}{2}=\\frac{f(p)+f(q)}{2}.$$</p>\n" }, { "answer_id": 2126034, "score": 5, "text": "<p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular\nit can be done entirely by hand as I show below.\nFollowing this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\\to [0,1]$, to be defined later, and let</p>\n\n<p>$$\n\\begin{array}{lclcl}\nA(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\\\\nB(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\\\\nC(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). \\\\\n\\end{array}\n$$\n Then there is a unique bijection $f$, defined on the triangle $pqr$, such that\n$f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\\in [0,1]$.</p>\n\n<p>As shown in Eric's answer, the only problem is then to find a $b$ such\nthat f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically\nas $t$ goes from $0$ to $1$.</p>\n\n<p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically\npermutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p>\n\n<p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect\nat $D(t)=(1-d(t))r+d(t)p$ where</p>\n\n<p>$$\nd(t)=\\frac{2t-b(t)}{3t-3b(t)+1}\n$$</p>\n\n<p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on\n$(0,1)$. Another little computation shows that the numerator of $d'$ is\n(here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p>\n\n<p>$$\nN(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t)\n$$</p>\n\n<p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\\frac{1}{3})^2+\\frac{5}{3}$\nis never zero.</p>\n" }, { "answer_id": 2125524, "score": 7, "text": "<p>No, such an $f$ need not be linear. 
Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p>\n\n<p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\\mathbb{R}^2$, since it holds on each scaled loop.</p>\n\n<hr>\n\n<p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). 
Explicitly, for $0\\leq t\\leq 1$, we define the following values of $f$:\n$$f((1-t)p+tq)=(1-t^2)q+t^2r$$ \n$$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$\n$$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p>\n\n<p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p>\n\n<p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\\arctan\\left(\\frac{2t^2-t-1}{t^2+t-1}\\right)$$ (for an appropriate branch of $\\arctan$) and show $g'(t)&gt;0$ for $0\\leq t\\leq 1$. 
This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\\leq t\\leq 1$ (see <a href=\"http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))\" rel=\"noreferrer\">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p>\n\n<p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\\left(\\frac{p+q}{2}\\right)=\\frac{3q+r}{4}\\neq \\frac{q+r}{2}=\\frac{f(p)+f(q)}{2}.$$</p>\n" }, { "answer_id": 2126034, "score": 5, "text": "<p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular\nit can be done entirely by hand as I show below.\nFollowing this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\\to [0,1]$, to be defined later, and let</p>\n\n<p>$$\n\\begin{array}{lclcl}\nA(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\\\\nB(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\\\\nC(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). 
\\\\\n\\end{array}\n$$\n Then there is a unique bijection $f$, defined on the triangle $pqr$, such that\n$f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\\in [0,1]$.</p>\n\n<p>As shown in Eric's answer, the only problem is then to find a $b$ such\nthat f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically\nas $t$ goes from $0$ to $1$.</p>\n\n<p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically\npermutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p>\n\n<p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect\nat $D(t)=(1-d(t))r+d(t)p$ where</p>\n\n<p>$$\nd(t)=\\frac{2t-b(t)}{3t-3b(t)+1}\n$$</p>\n\n<p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on\n$(0,1)$. Another little computation shows that the numerator of $d'$ is\n(here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p>\n\n<p>$$\nN(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t)\n$$</p>\n\n<p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\\frac{1}{3})^2+\\frac{5}{3}$\nis never zero.</p>\n" }, { "answer_id": 2125524, "score": 7, "text": "<p>No, such an $f$ need not be linear. Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. 
Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p>\n\n<p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\\mathbb{R}^2$, since it holds on each scaled loop.</p>\n\n<hr>\n\n<p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). Explicitly, for $0\\leq t\\leq 1$, we define the following values of $f$:\n$$f((1-t)p+tq)=(1-t^2)q+t^2r$$ \n$$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$\n$$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p>\n\n<p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. 
Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p>\n\n<p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\\arctan\\left(\\frac{2t^2-t-1}{t^2+t-1}\\right)$$ (for an appropriate branch of $\\arctan$) and show $g'(t)&gt;0$ for $0\\leq t\\leq 1$. This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\\leq t\\leq 1$ (see <a href=\"http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))\" rel=\"noreferrer\">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p>\n\n<p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. 
Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\\left(\\frac{p+q}{2}\\right)=\\frac{3q+r}{4}\\neq \\frac{q+r}{2}=\\frac{f(p)+f(q)}{2}.$$</p>\n" }, { "answer_id": 2126034, "score": 5, "text": "<p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular\nit can be done entirely by hand as I show below.\nFollowing this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\\to [0,1]$, to be defined later, and let</p>\n\n<p>$$\n\\begin{array}{lclcl}\nA(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\\\\nB(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\\\\nC(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). \\\\\n\\end{array}\n$$\n Then there is a unique bijection $f$, defined on the triangle $pqr$, such that\n$f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\\in [0,1]$.</p>\n\n<p>As shown in Eric's answer, the only problem is then to find a $b$ such\nthat f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically\nas $t$ goes from $0$ to $1$.</p>\n\n<p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically\npermutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p>\n\n<p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect\nat $D(t)=(1-d(t))r+d(t)p$ where</p>\n\n<p>$$\nd(t)=\\frac{2t-b(t)}{3t-3b(t)+1}\n$$</p>\n\n<p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on\n$(0,1)$. Another little computation shows that the numerator of $d'$ is\n(here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p>\n\n<p>$$\nN(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t)\n$$</p>\n\n<p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\\frac{1}{3})^2+\\frac{5}{3}$\nis never zero.</p>\n" }, { "answer_id": 2125524, "score": 7, "text": "<p>No, such an $f$ need not be linear. 
Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p>\n\n<p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\\mathbb{R}^2$, since it holds on each scaled loop.</p>\n\n<hr>\n\n<p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). 
Explicitly, for $0\\leq t\\leq 1$, we define the following values of $f$:\n$$f((1-t)p+tq)=(1-t^2)q+t^2r$$ \n$$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$\n$$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p>\n\n<p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p>\n\n<p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\\arctan\\left(\\frac{2t^2-t-1}{t^2+t-1}\\right)$$ (for an appropriate branch of $\\arctan$) and show $g'(t)&gt;0$ for $0\\leq t\\leq 1$. 
This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\\leq t\\leq 1$ (see <a href=\"http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))\" rel=\"noreferrer\">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p>\n\n<p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\\left(\\frac{p+q}{2}\\right)=\\frac{3q+r}{4}\\neq \\frac{q+r}{2}=\\frac{f(p)+f(q)}{2}.$$</p>\n" }, { "answer_id": 2126034, "score": 5, "text": "<p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular\nit can be done entirely by hand as I show below.\nFollowing this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\\to [0,1]$, to be defined later, and let</p>\n\n<p>$$\n\\begin{array}{lclcl}\nA(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\\\\nB(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\\\\nC(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). 
\\\\\n\\end{array}\n$$\n Then there is a unique bijection $f$, defined on the triangle $pqr$, such that\n$f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\\in [0,1]$.</p>\n\n<p>As shown in Eric's answer, the only problem is then to find a $b$ such\nthat f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically\nas $t$ goes from $0$ to $1$.</p>\n\n<p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically\npermutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p>\n\n<p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect\nat $D(t)=(1-d(t))r+d(t)p$ where</p>\n\n<p>$$\nd(t)=\\frac{2t-b(t)}{3t-3b(t)+1}\n$$</p>\n\n<p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on\n$(0,1)$. Another little computation shows that the numerator of $d'$ is\n(here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p>\n\n<p>$$\nN(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t)\n$$</p>\n\n<p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\\frac{1}{3})^2+\\frac{5}{3}$\nis never zero.</p>\n" }, { "answer_id": 2125524, "score": 7, "text": "<p>No, such an $f$ need not be linear. Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. 
Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p>\n\n<p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\\mathbb{R}^2$, since it holds on each scaled loop.</p>\n\n<hr>\n\n<p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). Explicitly, for $0\\leq t\\leq 1$, we define the following values of $f$:\n$$f((1-t)p+tq)=(1-t^2)q+t^2r$$ \n$$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$\n$$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p>\n\n<p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. 
Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p>\n\n<p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\\arctan\\left(\\frac{2t^2-t-1}{t^2+t-1}\\right)$$ (for an appropriate branch of $\\arctan$) and show $g'(t)&gt;0$ for $0\\leq t\\leq 1$. This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\\leq t\\leq 1$ (see <a href=\"http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))\" rel=\"noreferrer\">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p>\n\n<p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\\in\\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. 
Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\\left(\\frac{p+q}{2}\\right)=\\frac{3q+r}{4}\\neq \\frac{q+r}{2}=\\frac{f(p)+f(q)}{2}.$$</p>\n" }, { "answer_id": 2126034, "score": 5, "text": "<p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular\nit can be done entirely by hand as I show below.\nFollowing this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\\to [0,1]$, to be defined later, and let</p>\n\n<p>$$\n\\begin{array}{lclcl}\nA(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\\\\nB(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\\\\nC(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). \\\\\n\\end{array}\n$$\n Then there is a unique bijection $f$, defined on the triangle $pqr$, such that\n$f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\\in [0,1]$.</p>\n\n<p>As shown in Eric's answer, the only problem is then to find a $b$ such\nthat f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically\nas $t$ goes from $0$ to $1$.</p>\n\n<p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically\npermutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p>\n\n<p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect\nat $D(t)=(1-d(t))r+d(t)p$ where</p>\n\n<p>$$\nd(t)=\\frac{2t-b(t)}{3t-3b(t)+1}\n$$</p>\n\n<p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on\n$(0,1)$. Another little computation shows that the numerator of $d'$ is\n(here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p>\n\n<p>$$\nN(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t)\n$$</p>\n\n<p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\\frac{1}{3})^2+\\frac{5}{3}$\nis never zero.</p>\n" } ]
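The constructions above lend themselves to a quick numerical check. The following standalone Python sketch is an editorial addition (not part of either answer; all names are ad hoc): it samples the three-piece loop directly from the defining equations $A(t)=(1-t)p+tq$, $B(t)=(1-t^2)q+t^2r$, $C(t)=-(A(t)+B(t))$, confirms that the winding angle is strictly monotone with exactly one turn around the origin, and checks the nonlinearity witness $f\left(\frac{p+q}{2}\right)=\frac{3q+r}{4}$ together with the positivity of $N(t)=3t^2-2t+2$ from the second answer.

```python
import math

# Sanity-check sketch for the two answers above (editorial addition).
# Points with p + q + r = 0, as chosen in the first answer.
p, q, r = (0.0, 1.0), (1.0, 0.0), (-1.0, -1.0)

def comb(a, b, s):
    """Affine combination (1-s)*a + s*b."""
    return ((1 - s) * a[0] + s * b[0], (1 - s) * a[1] + s * b[1])

def loop(n=4000):
    """Sample the closed loop: p->q linearly, q->r with parameter t^2,
    then the forced third path C(t) = -(A(t) + B(t)) from r back to p."""
    pts = []
    for i in range(n):
        pts.append(comb(p, q, i / n))                  # A(t)
    for i in range(n):
        pts.append(comb(q, r, (i / n) ** 2))           # B(t), nonlinear parameter
    for i in range(n):
        A = comb(p, q, i / n)
        B = comb(q, r, (i / n) ** 2)
        pts.append((-(A[0] + B[0]), -(A[1] + B[1])))   # C(t) = -(A + B)
    return pts

pts = loop()
# Successive angle increments, wrapped to [-pi, pi); for a loop that winds
# once around 0 with monotone angle they all share one sign and sum to +/-2*pi.
steps = []
for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
    d = math.atan2(y1, x1) - math.atan2(y0, x0)
    steps.append((d + math.pi) % (2 * math.pi) - math.pi)
monotone = all(d < 0 for d in steps) or all(d > 0 for d in steps)
winding = round(sum(steps) / (2 * math.pi))

# Nonlinearity witness: f((p+q)/2) = (3q+r)/4, while (f(p)+f(q))/2 = (q+r)/2.
f_mid = comb(q, r, 0.25)   # image of the midpoint of [p,q] (t = 1/2, so t^2 = 1/4)
avg = comb(q, r, 0.5)

# Second answer, with b(t) = t^2: N(t) = (3t-1)*b'(t) + 2 - 3*b(t) = 3t^2 - 2t + 2.
N_min = min((3 * t - 1) * 2 * t + 2 - 3 * t * t
            for t in (i / 1000 for i in range(1001)))

print(monotone, winding, f_mid, avg, N_min)
```

With these samples the script should report a strictly monotone winding of one full turn (clockwise for this choice of $p$, $q$, $r$), $f\left(\frac{p+q}{2}\right)=(0.5,-0.25)\neq(0,-0.5)$, and a minimum of $N$ near $5/3$, matching $N(t)=3(t-\frac{1}{3})^2+\frac{5}{3}$.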
Question: If $f$ is continuous and $f(f(x))+f(x)+x=0$ for all $x$ in $\mathbb R^2$, then $f$ is linear? Answer 1: <p>No, such an $f$ need not be linear. Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p> <p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\in\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\mathbb{R}^2$, since it holds on each scaled loop.</p> <hr> <p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). 
Explicitly, for $0\leq t\leq 1$, we define the following values of $f$: $$f((1-t)p+tq)=(1-t^2)q+t^2r$$ $$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$ $$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t)p+tq$$</p> <p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p> <p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\mapsto (2t^2-t-1,t^2+t-1).$$ To see the angle is monotone, we can define $$g(t)=\arctan\left(\frac{2t^2-t-1}{t^2+t-1}\right)$$ (the arctangent of $x(t)/y(t)$, for an appropriate branch of $\arctan$) and show $g'(t)&gt;0$ for $0\leq t\leq 1$. 
This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\leq t\leq 1$ (see <a href="http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))" rel="noreferrer">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its first coordinate is never positive).</p> <p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\in\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\left(\frac{p+q}{2}\right)=\frac{3q+r}{4}\neq \frac{q+r}{2}=\frac{f(p)+f(q)}{2}.$$</p> Answer 2: <p>The computational part of Eric Wofsey's answer can be simplified a little bit; in particular it can be done entirely by hand as I show below. Following this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\to [0,1]$, to be defined later, and let</p> <p>$$ \begin{array}{lclcl} A(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\ B(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\ C(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). 
\\ \end{array} $$ Then there is a unique bijection $f$, defined on the triangle $pqr$, such that $f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\in [0,1]$.</p> <p>As shown in Eric's answer, the only problem is then to find a $b$ such that $f$ is non-linear and such that the angle $(Or,OC(t))$ varies monotonically as $t$ goes from $0$ to $1$.</p> <p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically permutes $p,q$ and $r$, and this suffices to uniquely define $f$ if $f$ is linear).</p> <p>A little computation shows that the straight lines $(rp)$ and $(OC(t))$ intersect at $D(t)=(1-d(t))r+d(t)p$ where</p> <p>$$ d(t)=\frac{2t-b(t)}{3t-3b(t)+1} $$</p> <p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on $(0,1)$. Another little computation shows that the numerator of $d'$ is (here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p> <p>$$ N(t)=u'(t)v(t)-u(t)v'(t)=(3t-1)b'(t)+2-3b(t) $$</p> <p>If we try $b(t)=t^2$, we are lucky: $N(t)=3t^2-2t+2=3(t-\frac{1}{3})^2+\frac{5}{3}$ is never zero.</p> Answer 5: <p>No, such an $f$ need not be linear. Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. 
Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p> <p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\in\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\mathbb{R}^2$, since it holds on each scaled loop.</p> <hr> <p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). Explicitly, for $0\leq t\leq 1$, we define the following values of $f$: $$f((1-t)p+tq)=(1-t^2)q+t^2r$$ $$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$ $$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p> <p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). 
The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p> <p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\arctan\left(\frac{2t^2-t-1}{t^2+t-1}\right)$$ (for an appropriate branch of $\arctan$) and show $g'(t)&gt;0$ for $0\leq t\leq 1$. This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\leq t\leq 1$ (see <a href="http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))" rel="noreferrer">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p> <p>So this loop passes through every angle with respect to the origin exactly once. 
This means that for any nonzero $x\in\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\left(\frac{p+q}{2}\right)=\frac{3q+r}{4}\neq \frac{q+r}{2}=\frac{f(p)+f(q)}{2}.$$</p> Answer 6: <p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular it can be done entirely by hand as I show below. Following this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\to [0,1]$, to be defined later, and let</p> <p>$$ \begin{array}{lclcl} A(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\ B(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\ C(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). \\ \end{array} $$ Then there is a unique bijection $f$, defined on the triangle $pqr$, such that $f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\in [0,1]$.</p> <p>As shown in Eric's answer, the only problem is then to find a $b$ such that f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically as $t$ goes from $0$ to $1$.</p> <p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically permutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p> <p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect at $D(t)=(1-d(t))r+d(t)p$ where</p> <p>$$ d(t)=\frac{2t-b(t)}{3t-3b(t)+1} $$</p> <p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on $(0,1)$. 
Another little computation shows that the numerator of $d'$ is (here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p> <p>$$ N(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t) $$</p> <p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\frac{1}{3})^2+\frac{5}{3}$ is never zero.</p> Answer 7: <p>No, such an $f$ need not be linear. Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p> <p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\in\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\mathbb{R}^2$, since it holds on each scaled loop.</p> <hr> <p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. 
Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). Explicitly, for $0\leq t\leq 1$, we define the following values of $f$: $$f((1-t)p+tq)=(1-t^2)q+t^2r$$ $$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$ $$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p> <p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p> <p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\arctan\left(\frac{2t^2-t-1}{t^2+t-1}\right)$$ (for an appropriate branch of $\arctan$) and show $g'(t)&gt;0$ for $0\leq t\leq 1$. 
This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\leq t\leq 1$ (see <a href="http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))" rel="noreferrer">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p> <p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\in\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\left(\frac{p+q}{2}\right)=\frac{3q+r}{4}\neq \frac{q+r}{2}=\frac{f(p)+f(q)}{2}.$$</p> Answer 8: <p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular it can be done entirely by hand as I show below. Following this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\to [0,1]$, to be defined later, and let</p> <p>$$ \begin{array}{lclcl} A(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\ B(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\ C(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). 
\\ \end{array} $$ Then there is a unique bijection $f$, defined on the triangle $pqr$, such that $f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\in [0,1]$.</p> <p>As shown in Eric's answer, the only problem is then to find a $b$ such that f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically as $t$ goes from $0$ to $1$.</p> <p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically permutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p> <p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect at $D(t)=(1-d(t))r+d(t)p$ where</p> <p>$$ d(t)=\frac{2t-b(t)}{3t-3b(t)+1} $$</p> <p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on $(0,1)$. Another little computation shows that the numerator of $d'$ is (here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p> <p>$$ N(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t) $$</p> <p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\frac{1}{3})^2+\frac{5}{3}$ is never zero.</p> Answer 9: <p>No, such an $f$ need not be linear. Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p> <p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. 
If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\in\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\mathbb{R}^2$, since it holds on each scaled loop.</p> <hr> <p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). Explicitly, for $0\leq t\leq 1$, we define the following values of $f$: $$f((1-t)p+tq)=(1-t^2)q+t^2r$$ $$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$ $$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p> <p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. 
This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p> <p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\arctan\left(\frac{2t^2-t-1}{t^2+t-1}\right)$$ (for an appropriate branch of $\arctan$) and show $g'(t)&gt;0$ for $0\leq t\leq 1$. This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\leq t\leq 1$ (see <a href="http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))" rel="noreferrer">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p> <p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\in\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. 
Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\left(\frac{p+q}{2}\right)=\frac{3q+r}{4}\neq \frac{q+r}{2}=\frac{f(p)+f(q)}{2}.$$</p> Answer 10: <p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular it can be done entirely by hand as I show below. Following this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\to [0,1]$, to be defined later, and let</p> <p>$$ \begin{array}{lclcl} A(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\ B(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\ C(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). \\ \end{array} $$ Then there is a unique bijection $f$, defined on the triangle $pqr$, such that $f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\in [0,1]$.</p> <p>As shown in Eric's answer, the only problem is then to find a $b$ such that f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically as $t$ goes from $0$ to $1$.</p> <p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically permutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p> <p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect at $D(t)=(1-d(t))r+d(t)p$ where</p> <p>$$ d(t)=\frac{2t-b(t)}{3t-3b(t)+1} $$</p> <p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on $(0,1)$. Another little computation shows that the numerator of $d'$ is (here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p> <p>$$ N(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t) $$</p> <p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\frac{1}{3})^2+\frac{5}{3}$ is never zero.</p> Answer 11: <p>No, such an $f$ need not be linear. Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. 
Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p> <p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\in\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\mathbb{R}^2$, since it holds on each scaled loop.</p> <hr> <p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). Explicitly, for $0\leq t\leq 1$, we define the following values of $f$: $$f((1-t)p+tq)=(1-t^2)q+t^2r$$ $$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$ $$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p> <p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). 
The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p> <p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\arctan\left(\frac{2t^2-t-1}{t^2+t-1}\right)$$ (for an appropriate branch of $\arctan$) and show $g'(t)&gt;0$ for $0\leq t\leq 1$. This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\leq t\leq 1$ (see <a href="http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))" rel="noreferrer">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p> <p>So this loop passes through every angle with respect to the origin exactly once. 
This means that for any nonzero $x\in\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\left(\frac{p+q}{2}\right)=\frac{3q+r}{4}\neq \frac{q+r}{2}=\frac{f(p)+f(q)}{2}.$$</p> Answer 12: <p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular it can be done entirely by hand as I show below. Following this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\to [0,1]$, to be defined later, and let</p> <p>$$ \begin{array}{lclcl} A(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\ B(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\ C(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). \\ \end{array} $$ Then there is a unique bijection $f$, defined on the triangle $pqr$, such that $f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\in [0,1]$.</p> <p>As shown in Eric's answer, the only problem is then to find a $b$ such that f is non-linear and such that the angle $(Or,OC(t))$ varies monotonically as $t$ goes from $0$ to $1$.</p> <p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically permutes $p,q$ and $r$, and this suffices to uniquely define $f$ is $f$ is linear).</p> <p>A little computation shows that the straight lines $(qr)$ and $(OC(t))$ intersect at $D(t)=(1-d(t))r+d(t)p$ where</p> <p>$$ d(t)=\frac{2t-b(t)}{3t-3b(t)+1} $$</p> <p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on $(0,1)$. 
Another little computation shows that the numerator of $d'$ is (here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p> <p>$$ N(t)=u'(t)v(t)-u(t)v'(t)=(3b(t)-1)b'(t)+2-3b(t) $$</p> <p>If we try $b(t)=t^2$, we are lucky : $N(t)=3t^2-2t+2=3(t-\frac{1}{3})^2+\frac{5}{3}$ is never zero.</p> Answer 13: <p>No, such an $f$ need not be linear. Explicitly writing down a counterexample is rather messy (unless I'm missing some trick), but here's the idea. Fix three points $p$, $q$, and $r$ such that $p+q+r=0$, a path from $p$ to $q$, and a path from $q$ to $r$. Define $f$ so that it maps the path from $p$ to $q$ to the path from $q$ to $r$. We are then forced to define $f$ on the path from $q$ to $r$ to follow a certain path from $r$ to $p$, in order for $f(f(x))+f(x)+x=0$ to hold. Then we can define $f$ to take the path from $r$ to $p$ to the path from $p$ to $q$.</p> <p>In this way (if we chose our paths well so that everything turns out to be well-defined), we can define $f$ on a loop that goes from $p$ to $q$ to $r$ and then back to $p$ such that $f(f(x))+f(x)+x=0$ for every point on the loop. If we have chosen our paths well, this loop will go around the origin once with monotonically increasing angle, so that for any nonzero $x\in\mathbb{R}^2$ there is a unique (continuously varying) $t&gt;0$ such that $tx$ is a point of our loop. We can now extend $f$ to all of $\mathbb{R}^2$ by defining $f(x)=t^{-1}f(tx)$ for this value of $t$ (and $f(0)=0$). That is, we partition $\mathbb{R}^2$ into scaled copies of our loop, and define $f$ in the same way on each one. The equation $f(f(x))+f(x)+x=0$ will then hold on all of $\mathbb{R}^2$, since it holds on each scaled loop.</p> <hr> <p>OK, now here's an explicit example if you really want one. Let us take $p=(0,1)$, $q=(1,0)$, and $r=(-1,-1)$. Our first path will be the line segment from $p$ to $q$, parametrized linearly. 
Our second path will be the line segment from $q$ to $r$, but parametrized nonlinearly (this will make our $f$ nonlinear). Explicitly, for $0\leq t\leq 1$, we define the following values of $f$: $$f((1-t)p+tq)=(1-t^2)q+t^2r$$ $$f((1-t^2)q+t^2r)=-(1-t)p-tq-(1-t^2)q-t^2r$$ $$f(-(1-t)p-tq-(1-t^2)q-t^2r)= (1-t^2)q+t^2r$$</p> <p>The first equation says that $f$ maps the line segment from $p$ to $q$ to the line segment from $q$ to $r$, but with a nonlinear parametrization of the latter line segment (using $t^2$ instead of $t$). The second equation then defines $f$ on the line segment from $q$ to $r$ in the only possible way such that $f(f(x))+f(x)+x=0$ for any $x$ on the segment from $p$ to $q$. This means that $f$ maps the line segment from $q$ to $r$ to some quadratic path from $r$ to $p$. Finally, the third equation defines $f$ at points on the quadratic path so that $f(f(f(x)))=x$ for all the points where we have defined $f$ so far. This will make it so $f(f(x))+f(x)+x=0$ is also true on the line segment from $q$ to $r$ and on the quadratic path from $r$ to $p$.</p> <p>We want to know that this loop we have defined by these three paths goes around $0$ once with strictly increasing angle (in particular, this means our quadratic path is injective and doesn't cross the line segments, so our definitions of $f$ cannot conflict with each other). It is clear that this is true for the two line segments; checking that it is also true for the quadratic path is just a messy computation. Plugging in the definitions of $p$, $q$, and $r$, we see that the quadratic path is parametrized by $$t\mapsto (t^2+t-1,2t^2-t-1).$$ To see the angle is increasing, we can define $$g(t)=\arctan\left(\frac{2t^2-t-1}{t^2+t-1}\right)$$ (for an appropriate branch of $\arctan$) and show $g'(t)&gt;0$ for $0\leq t\leq 1$. 
This is just a computation: $g'(t)$ is a rational function in $t$, and with some work you can show its numerator and denominator are always positive for $0\leq t\leq 1$ (see <a href="http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))" rel="noreferrer">http://www.wolframalpha.com/input/?i=derivative+of+arctan((2t%5E2-t-1)%2F(t%5E2%2Bt-1))</a>, for instance). We can also check that the quadratic path does not loop all the way around the circle (for instance, since its second coordinate is always negative).</p> <p>So this loop passes through every angle with respect to the origin exactly once. This means that for any nonzero $x\in\mathbb{R}^2$ there is a unique $t&gt;0$ such that $tx$ is on the loop, and we can define $f(x)=t^{-1}f(tx)$ (and $f(0)=0$) as described above. Finally, this $f$ is nonlinear: for instance, $f(p)=q$ and $f(q)=r$ but $$f\left(\frac{p+q}{2}\right)=\frac{3q+r}{4}\neq \frac{q+r}{2}=\frac{f(p)+f(q)}{2}.$$</p> Answer 14: <p>The computational part of Eric Wofsey's answer can be simplified a little bit ; in particular it can be done entirely by hand as I show below. Following this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\to [0,1]$, to be defined later, and let</p> <p>$$ \begin{array}{lclcl} A(t) &amp;=&amp; (1-t)p+tq &amp;=&amp; (t,1-t) \\ B(t) &amp;=&amp; (1-b(t))q+b(t)r &amp;=&amp; (1-2b(t),-b(t)) \\ C(t) &amp;=&amp; -(A(t)+B(t)) &amp;=&amp; (2b(t)-(t+1),t+b(t)-1). 
\\ \end{array} $$ Then there is a unique bijection $f$, defined on the triangle $pqr$, such that $f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\in [0,1]$.</p> <p>As shown in Eric's answer, the only problem is then to find a $b$ such that $f$ is non-linear and such that the angle $(Or,OC(t))$ varies monotonically as $t$ goes from $0$ to $1$.</p> <p>It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically permutes $p,q$ and $r$, and this suffices to uniquely define $f$ if $f$ is linear).</p> <p>A little computation shows that the straight lines $(rp)$ and $(OC(t))$ intersect at $D(t)=(1-d(t))r+d(t)p$ where</p> <p>$$ d(t)=\frac{2t-b(t)}{3t-3b(t)+1} $$</p> <p>So it suffices to find a $b$ such that the derivative of $d$ is never zero on $(0,1)$. Another little computation shows that the numerator of $d'$ is (here $u$ and $v$ denote the numerator and denominator of $d$ respectively)</p> <p>$$ N(t)=u'(t)v(t)-u(t)v'(t)=(3t-1)b'(t)+2-3b(t) $$</p> <p>If we try $b(t)=t^2$, we are lucky: $N(t)=3t^2-2t+2=3(t-\frac{1}{3})^2+\frac{5}{3}$ is never zero.</p>
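The two constructions above can be checked numerically. Below is a quick sanity-check sketch in plain Python (the grid size `N` and the helper `lerp` are my own choices, not from the answers): it traces the three arcs of the loop for $b(t)=t^2$, confirms that $f(f(x))+f(x)+x=0$ holds on the loop by construction, verifies that the polar angle along the loop is strictly monotonic with total turning of one revolution, and checks that Ewan Delanoy's polynomial $N(t)=3t^2-2t+2$ never vanishes on $[0,1]$.

```python
import math

# Vertices with p + q + r = 0, as in the answers above.
p, q, r = (0.0, 1.0), (1.0, 0.0), (-1.0, -1.0)

def lerp(u, v, s):
    """Point (1 - s)*u + s*v on the segment from u to v."""
    return (u[0] + s * (v[0] - u[0]), u[1] + s * (v[1] - u[1]))

N = 2000  # grid resolution (an arbitrary choice)

# Arc 1: x = (1-t)p + tq; arc 2: f(x) = (1-t^2)q + t^2 r; arc 3: f(f(x)) = -x - f(x).
loop = [lerp(p, q, k / N) for k in range(N + 1)]
loop += [lerp(q, r, (k / N) ** 2) for k in range(1, N + 1)]
for k in range(1, N + 1):
    t = k / N
    a, b = lerp(p, q, t), lerp(q, r, t * t)
    loop.append((-(a[0] + b[0]), -(a[1] + b[1])))

# f(f(x)) + f(x) + x = 0 holds on the loop by construction:
for k in range(N + 1):
    t = k / N
    a, b = lerp(p, q, t), lerp(q, r, t * t)
    c = (-(a[0] + b[0]), -(a[1] + b[1]))
    assert abs(a[0] + b[0] + c[0]) + abs(a[1] + b[1] + c[1]) < 1e-12

# The polar angle along the loop should be strictly monotonic, with
# total turning of exactly one revolution (clockwise, i.e. -2*pi).
steps, total, prev = [], 0.0, None
for x, y in loop:
    ang = math.atan2(y, x)
    if prev is not None:
        d = ang - prev
        if d > math.pi:       # unwrap the jump across the branch cut at -pi
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        steps.append(d)
        total += d
    prev = ang
assert all(d < 0 for d in steps)        # angle strictly decreasing
assert abs(total + 2 * math.pi) < 1e-9  # winding number one

# Ewan Delanoy's criterion for b(t) = t^2: N(t) = 3t^2 - 2t + 2 never vanishes.
assert min(3 * t * t - 2 * t + 2 for t in (k / N for k in range(N + 1))) > 0
```

Since the angle is strictly monotonic and the total turn is one full revolution, every ray from the origin meets the loop exactly once, which is exactly what the radial extension $f(x)=t^{-1}f(tx)$ requires.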
$(\mathbb R, \oplus)$ is a group. Define a multiplication with which we get a field. Where $a \oplus b = a + b +1$
2,109,893
6
https://math.stackexchange.com/questions/2109893/mathbb-r-oplus-is-a-group-define-a-multiplication-with-which-we-get-a-fi
<p>I have this problem from the book <strong>Real Analysis</strong> by Miklos Laczkovich (<a href="https://books.google.com/books?id=e_2zCgAAQBAJ&amp;pg=PA35" rel="nofollow noreferrer">p. $35$</a>):</p> <blockquote> <p>Let $\oplus : \mathbb R \times \mathbb R\to \mathbb R $, where $(a,b) \mapsto a + b +1$. Then, $(\mathbb R, \oplus)$ is a group. Define a multiplication with which we get a field.</p> </blockquote> <p>But the exercise comes with no hints. Could anyone help me?</p>
[ { "answer_id": 2109918, "score": 2, "text": "<p>Consider $a\\otimes b= ab+a+b$. It is clear that $\\otimes$ is commutative, since the defining expression is symmetric in $a$ and $b$. For distributivity over $\\oplus$, notice that:</p>\n\n<p>$$(a\\otimes b) \\oplus (a\\otimes c)= (ab+a+b)\\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$\n$$=a(b+c)+2a+b+c+1$$\nThat is equal to\n$$a\\otimes(b\\oplus c)=a\\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$\n$$=a(b+c)+2a+b+c+1$$</p>\n\n<p>Finally, for associativity, note that \n$$a\\otimes(b\\otimes c)=a\\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$\n$$=abc+ab+ac+bc+a+b+c$$\nAnd\n$$(a\\otimes b)\\otimes c=(ab+a+b)\\otimes c=(ab+a+b)c+(ab+a+b)+c$$\n$$=abc+ab+ac+bc+a+b+c$$\nOne can further check that $0$ is the $\\otimes$-identity ($a\\otimes 0=a$) and that every $a\\neq -1$ (here $-1$, the $\\oplus$-identity, plays the role of zero) has the $\\otimes$-inverse $\\frac{-a}{a+1}$.</p>\n" }, { "answer_id": 2109982, "score": 2, "text": "<p>The most straightforward approach to solving this problem is to prove $(\\mathbb{R}, \\oplus) \\cong (\\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\\otimes$ in terms of $\\times$.</p>\n" }, { "answer_id": 2109985, "score": 2, "text": "<p>Let us start by noticing that \n$$(a-1)\\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$\nThis means that $f\\colon\\mathbb R\\to\\mathbb R$ defined by $f(x)=x-1$ is a <a href=\"https://en.wikipedia.org/wiki/Group_isomorphism\" rel=\"nofollow noreferrer\">group isomorphism</a> between $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$. (The map $f$ is a bijection and fulfills $f(a+b)=f(a)\\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism from $(\\mathbb R,\\oplus)$ to $(\\mathbb R,+)$.</p>\n\n<p>I had a brief glance at the book mentioned in the question - it seems that the notions of group and isomorphism are not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. We are basically saying that after \"renaming\" the objects the two structures $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$ are the same.</p>\n\n<p>Just notice this part of the <a href=\"https://en.wikipedia.org/wiki/Cayley_table\" rel=\"nofollow noreferrer\">Cayley table</a> showing the results of each operation.
(Of course, since we are working with an infinite set, we cannot put the whole set $\\mathbb R$ into a finite table.) </p>\n\n<p>$$\n\\begin{array}{cc}\n\\begin{array}{c|ccccc}\n + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\\hline\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\\\\n 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \n\\end{array}\n &amp; \n\\begin{array}{c|ccccc}\n \\oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\\hline\n -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\\\\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3\n\\end{array} \n\\end{array}\n$$\nYou can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\\mapsto-4$, $-2\\mapsto-3$, $-1\\mapsto-2$, etc., then we get the second table. </p>\n\n<p>In this sense, the two operations are \"the same\", we just \"renamed\" the elements. (And $x\\mapsto x-1$ is the dictionary.) </p>\n\n<hr>\n\n<p>Now we know that $(\\mathbb R,+,\\cdot)$ is a field. We also know that using \"renaming\" $f\\colon x\\mapsto x-1$ creates $\\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\\cdot$?\n$$a\\odot b = (a+1)\\cdot(b+1)-1=ab+a+b$$\nor, equivalently, \n$$a\\odot b +1 = (a+1)\\cdot(b+1).$$\nTherefore $a\\odot b=ab+a+b$ seems like a natural candidate for the multiplication.\n(Notice that I have used the inverse mapping $x\\mapsto x+1$.)</p>\n\n<hr>\n\n<p>This can be formulated a bit more simply in case you are already familiar with the notions of group isomorphism and field isomorphism. I have posted a somewhat lengthier answer on purpose. (I was not sure whether the OP is familiar with these notions.
And for the readers who are already familiar with them, there already is a shorter answer. Perhaps this longer answer might be useful for some people reading this question, too.)</p>\n\n<p>I will add the link to this post, which might also help to gain some insight: <a href=\"https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms\">Intuition on group homomorphisms</a>.</p>\n\n<p>And I will also add another link to a post where at least some of the field axioms for $(\\mathbb R,\\oplus,\\odot)$ are verified: <a href=\"https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio\">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p>\n" }, { "answer_id": 2110422, "score": 1, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$ $</span> Any set bijection <span class=\"math-container\">$\\,h\\,:\\,R'\\to R\\,$</span> serves to <a href=\"https://math.stackexchange.com/search?q=user%3A242+transport\">transport the (ring) structure</a> of <span class=\"math-container\">$\\,(R,+,*,0,1),$</span> to <span class=\"math-container\">$\\,(R',\\oplus,\\otimes,0',1') \\,$</span> by simply defining the operations in <span class=\"math-container\">$\\,R'\\,$</span> to make <span class=\"math-container\">$\\,h\\,$</span> be a (ring) isomorphism</p>\n<p><span class=\"math-container\">$$\\begin{align} &amp;a \\oplus b\\, =\\ h^{-1}(h(a) + h(b)),\\quad 0' = h^{-1}(0)\\\\ \n &amp;a \\otimes b\\, =\\, h^{-1}(h(a)\\, * h(b)),\\quad\\, 1' = h^{-1}(1)\\end{align}\\qquad $$</span></p>\n<p>Yours is the special case <span class=\"math-container\">$\\ h(x) = x+1.$</span></p>\n<p>As an example let's explicitly examine how the associative law is transported from <span class=\"math-container\">$R$</span> to <span class=\"math-container\">$R'$</span>.</p>\n<p><span class=\"math-container\">$$ \\begin{align} h(a\\oplus(b\\oplus c)) \\,&amp;=\\, 
h(a)+h(b\\oplus c)\\\\ &amp;=\\, h(a)+(h(b) + h(c))\\\\ &amp;=\\, (h(a) + h(b))+h(c)\\\\ &amp;=\\, h(a\\oplus b)+h(c)\\\\ &amp;=\\, h((a\\oplus b)\\oplus c)\\end{align}\\qquad $$</span></p>\n<p>Therefore we conclude that <span class=\"math-container\">$\\ a\\oplus (b\\oplus c) = (a\\oplus b)\\oplus c\\ $</span> by applying <span class=\"math-container\">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p>\n" }, { "answer_id": 2109918, "score": 2, "text": "<p>If you consider $a\\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p>\n\n<p>$$(a\\otimes b) \\oplus (a\\otimes c)= (ab+a+b)\\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$\n$$=a(b+c)+2a+b+c+1$$\nThat is equal to\n$$a\\otimes(b\\oplus c)=a\\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$\n$$=a(b+c)+2a+b+c+1$$</p>\n\n<p>Finally look that \n$$a\\otimes(b\\otimes c)=a\\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$\n$$=abc+ab+ac+bc+a+b+c$$\nAnd\n$$(a\\otimes b)\\otimes c=(ab+a+b)\\otimes c=(ab+a+b)c+(ab+a+b)+c$$\n$$=abc+ab+ac+bc+a+b+c$$</p>\n" }, { "answer_id": 2109982, "score": 2, "text": "<p>The most straightforward approach to solving this problem is to prove $(\\mathbb{R}, \\oplus) \\cong (\\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\\otimes$ in terms of $\\times$.</p>\n" }, { "answer_id": 2109985, "score": 2, "text": "<p>Let us start by noticing that \n$$(a-1)\\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$\nThis means that $f\\colon\\mathbb R\\to\\mathbb R$ defined by $f(x)=x-1$ is a <a href=\"https://en.wikipedia.org/wiki/Group_isomorphism\" rel=\"nofollow noreferrer\">group isomorphism</a> between $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$. (The map $f$ is bijection and fulfills $f(a+b)=f(a)\\oplus f(b)$.) 
And the inverse $g(x)=x+1$ is an isomorphism $(\\mathbb R,\\oplus)$ to $(\\mathbb R,+)$.</p>\n\n<p>I had a brief glance at the book mentioned in the question - it seems that the notion of of group and the notion of isomorphism is not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. We are basically saying that after \"renaming\" the objects the two structures $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$ are the same.</p>\n\n<p>Just notice this part of <a href=\"https://en.wikipedia.org/wiki/Cayley_table\" rel=\"nofollow noreferrer\">Cayley table</a> showing results of operation. (Of course, since we are working with an infinite set, we cannot put the whole set $\\mathbb R$ into a finite table.) </p>\n\n<p>$$\n\\begin{array}{cc}\n\\begin{array}{c|ccccc}\n + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\\hline\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\\\\n 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \n\\end{array}\n &amp; \n\\begin{array}{c|ccccc}\n \\oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\\hline\n -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\\\\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3\n\\end{array} \n\\end{array}\n$$\nYou can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\\mapsto-4$, $-2\\mapsto-3$, $-1\\mapsto-2$, etc., then we get the second table. </p>\n\n<p>In this sense, the two operations are \"the same\", we just \"renamed\" the elements. (And $x\\mapsto x-1$ is the dictionary.) </p>\n\n<hr>\n\n<p>Now we know that $(\\mathbb R,+,\\cdot)$ is a field. 
We also know that using \"renaming\" $f\\colon x\\mapsto x-1$ creates $\\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\\cdot$?\n$$a\\odot b = (a+1)\\cdot(b+1)-1=ab+a+b$$\nor \n$$a\\odot b +1 = (a+1)\\cdot(b+1) +1.$$\nTherefore $a\\odot b=ab+a+b$ seems like a natural candidate for the multiplication.\n(Notice that I have used the inverse mapping $x\\mapsto x+1$.)</p>\n\n<hr>\n\n<p>This can be formulated a bit simpler in case you are already familiar with the notion of group isomorphism and field isomorphism. I have posted a bit more lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. Perhaps this longer answer might be useful for some people reading this question, too.)</p>\n\n<p>I will add the link to this post, which might also help to gain some insight: <a href=\"https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms\">Intuition on group homomorphisms</a>.</p>\n\n<p>And I will also add another link to a post where at least some of the field axioms for $(\\mathbb R,\\oplus,\\odot)$ are verified: <a href=\"https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio\">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p>\n" }, { "answer_id": 2110422, "score": 1, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$ $</span> Any set bijection <span class=\"math-container\">$\\,h\\,:\\,R'\\to R\\,$</span> serves to <a href=\"https://math.stackexchange.com/search?q=user%3A242+transport\">transport the (ring) structure</a> of <span class=\"math-container\">$\\,(R,+,*,0,1),$</span> to <span class=\"math-container\">$\\,(R',\\oplus,\\otimes,0',1') \\,$</span> by simply defining the operations in <span class=\"math-container\">$\\,R'\\,$</span> to make <span 
class=\"math-container\">$\\,h\\,$</span> be a (ring) isomorphism</p>\n<p><span class=\"math-container\">$$\\begin{align} &amp;a \\oplus b\\, =\\ h^{-1}(h(a) + h(b)),\\quad 0' = h^{-1}(0)\\\\ \n &amp;a \\otimes b\\, =\\, h^{-1}(h(a)\\, * h(b)),\\quad\\, 1' = h^{-1}(1)\\end{align}\\qquad $$</span></p>\n<p>Yours is the special case <span class=\"math-container\">$\\ h(x) = x+1.$</span></p>\n<p>As an example let's explicitly examine how the associative law is transported from <span class=\"math-container\">$R$</span> to <span class=\"math-container\">$R'$</span>.</p>\n<p><span class=\"math-container\">$$ \\begin{align} h(a\\oplus(b\\oplus c)) \\,&amp;=\\, h(a)+h(b\\oplus c)\\\\ &amp;=\\, h(a)+(h(b) + h(c))\\\\ &amp;=\\, (h(a) + h(b))+h(c)\\\\ &amp;=\\, h(a\\oplus b)+h(c)\\\\ &amp;=\\, h((a\\oplus b)\\oplus c)\\end{align}\\qquad $$</span></p>\n<p>Therefore we conclude that <span class=\"math-container\">$\\ a\\oplus (b\\oplus c) = (a\\oplus b)\\oplus c\\ $</span> by applying <span class=\"math-container\">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p>\n" }, { "answer_id": 2109918, "score": 2, "text": "<p>If you consider $a\\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p>\n\n<p>$$(a\\otimes b) \\oplus (a\\otimes c)= (ab+a+b)\\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$\n$$=a(b+c)+2a+b+c+1$$\nThat is equal to\n$$a\\otimes(b\\oplus c)=a\\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$\n$$=a(b+c)+2a+b+c+1$$</p>\n\n<p>Finally look that \n$$a\\otimes(b\\otimes c)=a\\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$\n$$=abc+ab+ac+bc+a+b+c$$\nAnd\n$$(a\\otimes b)\\otimes c=(ab+a+b)\\otimes c=(ab+a+b)c+(ab+a+b)+c$$\n$$=abc+ab+ac+bc+a+b+c$$</p>\n" }, { "answer_id": 2109982, "score": 2, "text": "<p>The most straightforward approach to solving this problem is to prove $(\\mathbb{R}, \\oplus) \\cong (\\mathbb{R}, +)$. 
Once you've done that, you can use the isomorphism to define $\\otimes$ in terms of $\\times$.</p>\n" }, { "answer_id": 2109985, "score": 2, "text": "<p>Let us start by noticing that \n$$(a-1)\\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$\nThis means that $f\\colon\\mathbb R\\to\\mathbb R$ defined by $f(x)=x-1$ is a <a href=\"https://en.wikipedia.org/wiki/Group_isomorphism\" rel=\"nofollow noreferrer\">group isomorphism</a> between $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$. (The map $f$ is a bijection and fulfills $f(a+b)=f(a)\\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism from $(\\mathbb R,\\oplus)$ to $(\\mathbb R,+)$.</p>\n\n<p>I had a brief glance at the book mentioned in the question - it seems that the notions of group and isomorphism are not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. We are basically saying that after \"renaming\" the objects, the two structures $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$ are the same.</p>\n\n<p>Just look at this part of the <a href=\"https://en.wikipedia.org/wiki/Cayley_table\" rel=\"nofollow noreferrer\">Cayley table</a> showing the results of the operations. (Of course, since we are working with an infinite set, we cannot put the whole set $\\mathbb R$ into a finite table.) 
</p>\n\n<p>$$\n\\begin{array}{cc}\n\\begin{array}{c|ccccc}\n + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\\hline\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\\\\n 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \n\\end{array}\n &amp; \n\\begin{array}{c|ccccc}\n \\oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\\hline\n -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\\\\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3\n\\end{array} \n\\end{array}\n$$\nYou can see that if we replace, in the first table, each occurrence of $-4$ by $-5$, and similarly $-3\\mapsto-4$, $-2\\mapsto-3$, $-1\\mapsto-2$, etc., then we get the second table. </p>\n\n<p>In this sense, the two operations are \"the same\"; we just \"renamed\" the elements. (And $x\\mapsto x-1$ is the dictionary.) </p>\n\n<hr>\n\n<p>Now we know that $(\\mathbb R,+,\\cdot)$ is a field. We also know that the \"renaming\" $f\\colon x\\mapsto x-1$ creates $\\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\\cdot$?\n$$a\\odot b = (a+1)\\cdot(b+1)-1=ab+a+b$$\nor \n$$a\\odot b +1 = (a+1)\\cdot(b+1).$$\nTherefore $a\\odot b=ab+a+b$ seems like a natural candidate for the multiplication.\n(Notice that I have used the inverse mapping $x\\mapsto x+1$.)</p>\n\n<hr>\n\n<p>This can be formulated a bit more simply in case you are already familiar with the notions of group isomorphism and field isomorphism. I have posted a somewhat lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. 
Perhaps this longer answer might be useful for some people reading this question, too.)</p>\n\n<p>I will add the link to this post, which might also help to gain some insight: <a href=\"https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms\">Intuition on group homomorphisms</a>.</p>\n\n<p>And I will also add another link to a post where at least some of the field axioms for $(\\mathbb R,\\oplus,\\odot)$ are verified: <a href=\"https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio\">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p>\n" }, { "answer_id": 2110422, "score": 1, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$ $</span> Any set bijection <span class=\"math-container\">$\\,h\\,:\\,R'\\to R\\,$</span> serves to <a href=\"https://math.stackexchange.com/search?q=user%3A242+transport\">transport the (ring) structure</a> of <span class=\"math-container\">$\\,(R,+,*,0,1),$</span> to <span class=\"math-container\">$\\,(R',\\oplus,\\otimes,0',1') \\,$</span> by simply defining the operations in <span class=\"math-container\">$\\,R'\\,$</span> to make <span class=\"math-container\">$\\,h\\,$</span> be a (ring) isomorphism</p>\n<p><span class=\"math-container\">$$\\begin{align} &amp;a \\oplus b\\, =\\ h^{-1}(h(a) + h(b)),\\quad 0' = h^{-1}(0)\\\\ \n &amp;a \\otimes b\\, =\\, h^{-1}(h(a)\\, * h(b)),\\quad\\, 1' = h^{-1}(1)\\end{align}\\qquad $$</span></p>\n<p>Yours is the special case <span class=\"math-container\">$\\ h(x) = x+1.$</span></p>\n<p>As an example let's explicitly examine how the associative law is transported from <span class=\"math-container\">$R$</span> to <span class=\"math-container\">$R'$</span>.</p>\n<p><span class=\"math-container\">$$ \\begin{align} h(a\\oplus(b\\oplus c)) \\,&amp;=\\, h(a)+h(b\\oplus c)\\\\ &amp;=\\, h(a)+(h(b) + h(c))\\\\ &amp;=\\, (h(a) + h(b))+h(c)\\\\ &amp;=\\, 
h(a\\oplus b)+h(c)\\\\ &amp;=\\, h((a\\oplus b)\\oplus c)\\end{align}\\qquad $$</span></p>\n<p>Therefore we conclude that <span class=\"math-container\">$\\ a\\oplus (b\\oplus c) = (a\\oplus b)\\oplus c\\ $</span> by applying <span class=\"math-container\">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p>\n" }, { "answer_id": 2109918, "score": 2, "text": "<p>If you consider $a\\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p>\n\n<p>$$(a\\otimes b) \\oplus (a\\otimes c)= (ab+a+b)\\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$\n$$=a(b+c)+2a+b+c+1$$\nThat is equal to\n$$a\\otimes(b\\oplus c)=a\\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$\n$$=a(b+c)+2a+b+c+1$$</p>\n\n<p>Finally look that \n$$a\\otimes(b\\otimes c)=a\\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$\n$$=abc+ab+ac+bc+a+b+c$$\nAnd\n$$(a\\otimes b)\\otimes c=(ab+a+b)\\otimes c=(ab+a+b)c+(ab+a+b)+c$$\n$$=abc+ab+ac+bc+a+b+c$$</p>\n" }, { "answer_id": 2109982, "score": 2, "text": "<p>The most straightforward approach to solving this problem is to prove $(\\mathbb{R}, \\oplus) \\cong (\\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\\otimes$ in terms of $\\times$.</p>\n" }, { "answer_id": 2109985, "score": 2, "text": "<p>Let us start by noticing that \n$$(a-1)\\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$\nThis means that $f\\colon\\mathbb R\\to\\mathbb R$ defined by $f(x)=x-1$ is a <a href=\"https://en.wikipedia.org/wiki/Group_isomorphism\" rel=\"nofollow noreferrer\">group isomorphism</a> between $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$. (The map $f$ is bijection and fulfills $f(a+b)=f(a)\\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism $(\\mathbb R,\\oplus)$ to $(\\mathbb R,+)$.</p>\n\n<p>I had a brief glance at the book mentioned in the question - it seems that the notion of of group and the notion of isomorphism is not introduced in the book at this point. 
(Of course, I might have missed it.) But the basic intuition can be explained in a few words. We are basically saying that after \"renaming\" the objects the two structures $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$ are the same.</p>\n\n<p>Just notice this part of <a href=\"https://en.wikipedia.org/wiki/Cayley_table\" rel=\"nofollow noreferrer\">Cayley table</a> showing results of operation. (Of course, since we are working with an infinite set, we cannot put the whole set $\\mathbb R$ into a finite table.) </p>\n\n<p>$$\n\\begin{array}{cc}\n\\begin{array}{c|ccccc}\n + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\\hline\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\\\\n 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \n\\end{array}\n &amp; \n\\begin{array}{c|ccccc}\n \\oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\\hline\n -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\\\\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3\n\\end{array} \n\\end{array}\n$$\nYou can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\\mapsto-4$, $-2\\mapsto-3$, $-1\\mapsto-2$, etc., then we get the second table. </p>\n\n<p>In this sense, the two operations are \"the same\", we just \"renamed\" the elements. (And $x\\mapsto x-1$ is the dictionary.) </p>\n\n<hr>\n\n<p>Now we know that $(\\mathbb R,+,\\cdot)$ is a field. We also know that using \"renaming\" $f\\colon x\\mapsto x-1$ creates $\\oplus$ from $+$. 
What happens if we apply the same thing to the usual multiplication $\\cdot$?\n$$a\\odot b = (a+1)\\cdot(b+1)-1=ab+a+b$$\nor \n$$a\\odot b +1 = (a+1)\\cdot(b+1) +1.$$\nTherefore $a\\odot b=ab+a+b$ seems like a natural candidate for the multiplication.\n(Notice that I have used the inverse mapping $x\\mapsto x+1$.)</p>\n\n<hr>\n\n<p>This can be formulated a bit simpler in case you are already familiar with the notion of group isomorphism and field isomorphism. I have posted a bit more lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. Perhaps this longer answer might be useful for some people reading this question, too.)</p>\n\n<p>I will add the link to this post, which might also help to gain some insight: <a href=\"https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms\">Intuition on group homomorphisms</a>.</p>\n\n<p>And I will also add another link to a post where at least some of the field axioms for $(\\mathbb R,\\oplus,\\odot)$ are verified: <a href=\"https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio\">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p>\n" }, { "answer_id": 2110422, "score": 1, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$ $</span> Any set bijection <span class=\"math-container\">$\\,h\\,:\\,R'\\to R\\,$</span> serves to <a href=\"https://math.stackexchange.com/search?q=user%3A242+transport\">transport the (ring) structure</a> of <span class=\"math-container\">$\\,(R,+,*,0,1),$</span> to <span class=\"math-container\">$\\,(R',\\oplus,\\otimes,0',1') \\,$</span> by simply defining the operations in <span class=\"math-container\">$\\,R'\\,$</span> to make <span class=\"math-container\">$\\,h\\,$</span> be a (ring) isomorphism</p>\n<p><span 
class=\"math-container\">$$\\begin{align} &amp;a \\oplus b\\, =\\ h^{-1}(h(a) + h(b)),\\quad 0' = h^{-1}(0)\\\\ \n &amp;a \\otimes b\\, =\\, h^{-1}(h(a)\\, * h(b)),\\quad\\, 1' = h^{-1}(1)\\end{align}\\qquad $$</span></p>\n<p>Yours is the special case <span class=\"math-container\">$\\ h(x) = x+1.$</span></p>\n<p>As an example let's explicitly examine how the associative law is transported from <span class=\"math-container\">$R$</span> to <span class=\"math-container\">$R'$</span>.</p>\n<p><span class=\"math-container\">$$ \\begin{align} h(a\\oplus(b\\oplus c)) \\,&amp;=\\, h(a)+h(b\\oplus c)\\\\ &amp;=\\, h(a)+(h(b) + h(c))\\\\ &amp;=\\, (h(a) + h(b))+h(c)\\\\ &amp;=\\, h(a\\oplus b)+h(c)\\\\ &amp;=\\, h((a\\oplus b)\\oplus c)\\end{align}\\qquad $$</span></p>\n<p>Therefore we conclude that <span class=\"math-container\">$\\ a\\oplus (b\\oplus c) = (a\\oplus b)\\oplus c\\ $</span> by applying <span class=\"math-container\">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p>\n" }, { "answer_id": 2109918, "score": 2, "text": "<p>If you consider $a\\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p>\n\n<p>$$(a\\otimes b) \\oplus (a\\otimes c)= (ab+a+b)\\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$\n$$=a(b+c)+2a+b+c+1$$\nThat is equal to\n$$a\\otimes(b\\oplus c)=a\\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$\n$$=a(b+c)+2a+b+c+1$$</p>\n\n<p>Finally look that \n$$a\\otimes(b\\otimes c)=a\\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$\n$$=abc+ab+ac+bc+a+b+c$$\nAnd\n$$(a\\otimes b)\\otimes c=(ab+a+b)\\otimes c=(ab+a+b)c+(ab+a+b)+c$$\n$$=abc+ab+ac+bc+a+b+c$$</p>\n" }, { "answer_id": 2109982, "score": 2, "text": "<p>The most straightforward approach to solving this problem is to prove $(\\mathbb{R}, \\oplus) \\cong (\\mathbb{R}, +)$. 
Once you've done that, you can use the isomorphism to define $\\otimes$ in terms of $\\times$.</p>\n" }, { "answer_id": 2109985, "score": 2, "text": "<p>Let us start by noticing that \n$$(a-1)\\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$\nThis means that $f\\colon\\mathbb R\\to\\mathbb R$ defined by $f(x)=x-1$ is a <a href=\"https://en.wikipedia.org/wiki/Group_isomorphism\" rel=\"nofollow noreferrer\">group isomorphism</a> between $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$. (The map $f$ is bijection and fulfills $f(a+b)=f(a)\\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism $(\\mathbb R,\\oplus)$ to $(\\mathbb R,+)$.</p>\n\n<p>I had a brief glance at the book mentioned in the question - it seems that the notion of of group and the notion of isomorphism is not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. We are basically saying that after \"renaming\" the objects the two structures $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$ are the same.</p>\n\n<p>Just notice this part of <a href=\"https://en.wikipedia.org/wiki/Cayley_table\" rel=\"nofollow noreferrer\">Cayley table</a> showing results of operation. (Of course, since we are working with an infinite set, we cannot put the whole set $\\mathbb R$ into a finite table.) 
</p>\n\n<p>$$\n\\begin{array}{cc}\n\\begin{array}{c|ccccc}\n + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\\hline\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\\\\n 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \n\\end{array}\n &amp; \n\\begin{array}{c|ccccc}\n \\oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\\hline\n -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\\\\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3\n\\end{array} \n\\end{array}\n$$\nYou can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\\mapsto-4$, $-2\\mapsto-3$, $-1\\mapsto-2$, etc., then we get the second table. </p>\n\n<p>In this sense, the two operations are \"the same\", we just \"renamed\" the elements. (And $x\\mapsto x-1$ is the dictionary.) </p>\n\n<hr>\n\n<p>Now we know that $(\\mathbb R,+,\\cdot)$ is a field. We also know that using \"renaming\" $f\\colon x\\mapsto x-1$ creates $\\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\\cdot$?\n$$a\\odot b = (a+1)\\cdot(b+1)-1=ab+a+b$$\nor \n$$a\\odot b +1 = (a+1)\\cdot(b+1) +1.$$\nTherefore $a\\odot b=ab+a+b$ seems like a natural candidate for the multiplication.\n(Notice that I have used the inverse mapping $x\\mapsto x+1$.)</p>\n\n<hr>\n\n<p>This can be formulated a bit simpler in case you are already familiar with the notion of group isomorphism and field isomorphism. I have posted a bit more lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. 
Perhaps this longer answer might be useful for some people reading this question, too.)</p>\n\n<p>I will add the link to this post, which might also help to gain some insight: <a href=\"https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms\">Intuition on group homomorphisms</a>.</p>\n\n<p>And I will also add another link to a post where at least some of the field axioms for $(\\mathbb R,\\oplus,\\odot)$ are verified: <a href=\"https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio\">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p>\n" }, { "answer_id": 2110422, "score": 1, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$ $</span> Any set bijection <span class=\"math-container\">$\\,h\\,:\\,R'\\to R\\,$</span> serves to <a href=\"https://math.stackexchange.com/search?q=user%3A242+transport\">transport the (ring) structure</a> of <span class=\"math-container\">$\\,(R,+,*,0,1),$</span> to <span class=\"math-container\">$\\,(R',\\oplus,\\otimes,0',1') \\,$</span> by simply defining the operations in <span class=\"math-container\">$\\,R'\\,$</span> to make <span class=\"math-container\">$\\,h\\,$</span> be a (ring) isomorphism</p>\n<p><span class=\"math-container\">$$\\begin{align} &amp;a \\oplus b\\, =\\ h^{-1}(h(a) + h(b)),\\quad 0' = h^{-1}(0)\\\\ \n &amp;a \\otimes b\\, =\\, h^{-1}(h(a)\\, * h(b)),\\quad\\, 1' = h^{-1}(1)\\end{align}\\qquad $$</span></p>\n<p>Yours is the special case <span class=\"math-container\">$\\ h(x) = x+1.$</span></p>\n<p>As an example let's explicitly examine how the associative law is transported from <span class=\"math-container\">$R$</span> to <span class=\"math-container\">$R'$</span>.</p>\n<p><span class=\"math-container\">$$ \\begin{align} h(a\\oplus(b\\oplus c)) \\,&amp;=\\, h(a)+h(b\\oplus c)\\\\ &amp;=\\, h(a)+(h(b) + h(c))\\\\ &amp;=\\, (h(a) + h(b))+h(c)\\\\ &amp;=\\, 
h(a\\oplus b)+h(c)\\\\ &amp;=\\, h((a\\oplus b)\\oplus c)\\end{align}\\qquad $$</span></p>\n<p>Therefore we conclude that <span class=\"math-container\">$\\ a\\oplus (b\\oplus c) = (a\\oplus b)\\oplus c\\ $</span> by applying <span class=\"math-container\">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p>\n" }, { "answer_id": 2109918, "score": 2, "text": "<p>If you consider $a\\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p>\n\n<p>$$(a\\otimes b) \\oplus (a\\otimes c)= (ab+a+b)\\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$\n$$=a(b+c)+2a+b+c+1$$\nThat is equal to\n$$a\\otimes(b\\oplus c)=a\\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$\n$$=a(b+c)+2a+b+c+1$$</p>\n\n<p>Finally look that \n$$a\\otimes(b\\otimes c)=a\\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$\n$$=abc+ab+ac+bc+a+b+c$$\nAnd\n$$(a\\otimes b)\\otimes c=(ab+a+b)\\otimes c=(ab+a+b)c+(ab+a+b)+c$$\n$$=abc+ab+ac+bc+a+b+c$$</p>\n" }, { "answer_id": 2109982, "score": 2, "text": "<p>The most straightforward approach to solving this problem is to prove $(\\mathbb{R}, \\oplus) \\cong (\\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\\otimes$ in terms of $\\times$.</p>\n" }, { "answer_id": 2109985, "score": 2, "text": "<p>Let us start by noticing that \n$$(a-1)\\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$\nThis means that $f\\colon\\mathbb R\\to\\mathbb R$ defined by $f(x)=x-1$ is a <a href=\"https://en.wikipedia.org/wiki/Group_isomorphism\" rel=\"nofollow noreferrer\">group isomorphism</a> between $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$. (The map $f$ is bijection and fulfills $f(a+b)=f(a)\\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism $(\\mathbb R,\\oplus)$ to $(\\mathbb R,+)$.</p>\n\n<p>I had a brief glance at the book mentioned in the question - it seems that the notion of of group and the notion of isomorphism is not introduced in the book at this point. 
(Of course, I might have missed it.) But the basic intuition can be explained in a few words. We are basically saying that after \"renaming\" the objects the two structures $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$ are the same.</p>\n\n<p>Just notice this part of <a href=\"https://en.wikipedia.org/wiki/Cayley_table\" rel=\"nofollow noreferrer\">Cayley table</a> showing results of operation. (Of course, since we are working with an infinite set, we cannot put the whole set $\\mathbb R$ into a finite table.) </p>\n\n<p>$$\n\\begin{array}{cc}\n\\begin{array}{c|ccccc}\n + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\\hline\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\\\\n 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \n\\end{array}\n &amp; \n\\begin{array}{c|ccccc}\n \\oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\\hline\n -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\\\\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3\n\\end{array} \n\\end{array}\n$$\nYou can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\\mapsto-4$, $-2\\mapsto-3$, $-1\\mapsto-2$, etc., then we get the second table. </p>\n\n<p>In this sense, the two operations are \"the same\", we just \"renamed\" the elements. (And $x\\mapsto x-1$ is the dictionary.) </p>\n\n<hr>\n\n<p>Now we know that $(\\mathbb R,+,\\cdot)$ is a field. We also know that using \"renaming\" $f\\colon x\\mapsto x-1$ creates $\\oplus$ from $+$. 
What happens if we apply the same thing to the usual multiplication $\\cdot$?\n$$a\\odot b = (a+1)\\cdot(b+1)-1=ab+a+b$$\nor \n$$a\\odot b +1 = (a+1)\\cdot(b+1) +1.$$\nTherefore $a\\odot b=ab+a+b$ seems like a natural candidate for the multiplication.\n(Notice that I have used the inverse mapping $x\\mapsto x+1$.)</p>\n\n<hr>\n\n<p>This can be formulated a bit simpler in case you are already familiar with the notion of group isomorphism and field isomorphism. I have posted a bit more lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. Perhaps this longer answer might be useful for some people reading this question, too.)</p>\n\n<p>I will add the link to this post, which might also help to gain some insight: <a href=\"https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms\">Intuition on group homomorphisms</a>.</p>\n\n<p>And I will also add another link to a post where at least some of the field axioms for $(\\mathbb R,\\oplus,\\odot)$ are verified: <a href=\"https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio\">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p>\n" }, { "answer_id": 2110422, "score": 1, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$ $</span> Any set bijection <span class=\"math-container\">$\\,h\\,:\\,R'\\to R\\,$</span> serves to <a href=\"https://math.stackexchange.com/search?q=user%3A242+transport\">transport the (ring) structure</a> of <span class=\"math-container\">$\\,(R,+,*,0,1),$</span> to <span class=\"math-container\">$\\,(R',\\oplus,\\otimes,0',1') \\,$</span> by simply defining the operations in <span class=\"math-container\">$\\,R'\\,$</span> to make <span class=\"math-container\">$\\,h\\,$</span> be a (ring) isomorphism</p>\n<p><span 
class=\"math-container\">$$\\begin{align} &amp;a \\oplus b\\, =\\ h^{-1}(h(a) + h(b)),\\quad 0' = h^{-1}(0)\\\\ \n &amp;a \\otimes b\\, =\\, h^{-1}(h(a)\\, * h(b)),\\quad\\, 1' = h^{-1}(1)\\end{align}\\qquad $$</span></p>\n<p>Yours is the special case <span class=\"math-container\">$\\ h(x) = x+1.$</span></p>\n<p>As an example let's explicitly examine how the associative law is transported from <span class=\"math-container\">$R$</span> to <span class=\"math-container\">$R'$</span>.</p>\n<p><span class=\"math-container\">$$ \\begin{align} h(a\\oplus(b\\oplus c)) \\,&amp;=\\, h(a)+h(b\\oplus c)\\\\ &amp;=\\, h(a)+(h(b) + h(c))\\\\ &amp;=\\, (h(a) + h(b))+h(c)\\\\ &amp;=\\, h(a\\oplus b)+h(c)\\\\ &amp;=\\, h((a\\oplus b)\\oplus c)\\end{align}\\qquad $$</span></p>\n<p>Therefore we conclude that <span class=\"math-container\">$\\ a\\oplus (b\\oplus c) = (a\\oplus b)\\oplus c\\ $</span> by applying <span class=\"math-container\">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p>\n" }, { "answer_id": 2109918, "score": 2, "text": "<p>If you consider $a\\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p>\n\n<p>$$(a\\otimes b) \\oplus (a\\otimes c)= (ab+a+b)\\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$\n$$=a(b+c)+2a+b+c+1$$\nThat is equal to\n$$a\\otimes(b\\oplus c)=a\\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$\n$$=a(b+c)+2a+b+c+1$$</p>\n\n<p>Finally look that \n$$a\\otimes(b\\otimes c)=a\\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$\n$$=abc+ab+ac+bc+a+b+c$$\nAnd\n$$(a\\otimes b)\\otimes c=(ab+a+b)\\otimes c=(ab+a+b)c+(ab+a+b)+c$$\n$$=abc+ab+ac+bc+a+b+c$$</p>\n" }, { "answer_id": 2109982, "score": 2, "text": "<p>The most straightforward approach to solving this problem is to prove $(\\mathbb{R}, \\oplus) \\cong (\\mathbb{R}, +)$. 
Once you've done that, you can use the isomorphism to define $\\otimes$ in terms of $\\times$.</p>\n" }, { "answer_id": 2109985, "score": 2, "text": "<p>Let us start by noticing that \n$$(a-1)\\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$\nThis means that $f\\colon\\mathbb R\\to\\mathbb R$ defined by $f(x)=x-1$ is a <a href=\"https://en.wikipedia.org/wiki/Group_isomorphism\" rel=\"nofollow noreferrer\">group isomorphism</a> between $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$. (The map $f$ is bijection and fulfills $f(a+b)=f(a)\\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism $(\\mathbb R,\\oplus)$ to $(\\mathbb R,+)$.</p>\n\n<p>I had a brief glance at the book mentioned in the question - it seems that the notion of of group and the notion of isomorphism is not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. We are basically saying that after \"renaming\" the objects the two structures $(\\mathbb R,+)$ and $(\\mathbb R,\\oplus)$ are the same.</p>\n\n<p>Just notice this part of <a href=\"https://en.wikipedia.org/wiki/Cayley_table\" rel=\"nofollow noreferrer\">Cayley table</a> showing results of operation. (Of course, since we are working with an infinite set, we cannot put the whole set $\\mathbb R$ into a finite table.) 
</p>\n\n<p>$$\n\\begin{array}{cc}\n\\begin{array}{c|ccccc}\n + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\\hline\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\\\\n 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \n\\end{array}\n &amp; \n\\begin{array}{c|ccccc}\n \\oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\\hline\n -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\\\\n -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\\\\n -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\\\n 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\\\n 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3\n\\end{array} \n\\end{array}\n$$\nYou can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\\mapsto-4$, $-2\\mapsto-3$, $-1\\mapsto-2$, etc., then we get the second table. </p>\n\n<p>In this sense, the two operations are \"the same\", we just \"renamed\" the elements. (And $x\\mapsto x-1$ is the dictionary.) </p>\n\n<hr>\n\n<p>Now we know that $(\\mathbb R,+,\\cdot)$ is a field. We also know that using \"renaming\" $f\\colon x\\mapsto x-1$ creates $\\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\\cdot$?\n$$a\\odot b = (a+1)\\cdot(b+1)-1=ab+a+b$$\nor \n$$a\\odot b +1 = (a+1)\\cdot(b+1) +1.$$\nTherefore $a\\odot b=ab+a+b$ seems like a natural candidate for the multiplication.\n(Notice that I have used the inverse mapping $x\\mapsto x+1$.)</p>\n\n<hr>\n\n<p>This can be formulated a bit simpler in case you are already familiar with the notion of group isomorphism and field isomorphism. I have posted a bit more lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. 
Perhaps this longer answer might be useful for some people reading this question, too.)</p>\n\n<p>I will add the link to this post, which might also help to gain some insight: <a href=\"https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms\">Intuition on group homomorphisms</a>.</p>\n\n<p>And I will also add another link to a post where at least some of the field axioms for $(\\mathbb R,\\oplus,\\odot)$ are verified: <a href=\"https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio\">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p>\n" }, { "answer_id": 2110422, "score": 1, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$ $</span> Any set bijection <span class=\"math-container\">$\\,h\\,:\\,R'\\to R\\,$</span> serves to <a href=\"https://math.stackexchange.com/search?q=user%3A242+transport\">transport the (ring) structure</a> of <span class=\"math-container\">$\\,(R,+,*,0,1),$</span> to <span class=\"math-container\">$\\,(R',\\oplus,\\otimes,0',1') \\,$</span> by simply defining the operations in <span class=\"math-container\">$\\,R'\\,$</span> to make <span class=\"math-container\">$\\,h\\,$</span> be a (ring) isomorphism</p>\n<p><span class=\"math-container\">$$\\begin{align} &amp;a \\oplus b\\, =\\ h^{-1}(h(a) + h(b)),\\quad 0' = h^{-1}(0)\\\\ \n &amp;a \\otimes b\\, =\\, h^{-1}(h(a)\\, * h(b)),\\quad\\, 1' = h^{-1}(1)\\end{align}\\qquad $$</span></p>\n<p>Yours is the special case <span class=\"math-container\">$\\ h(x) = x+1.$</span></p>\n<p>As an example let's explicitly examine how the associative law is transported from <span class=\"math-container\">$R$</span> to <span class=\"math-container\">$R'$</span>.</p>\n<p><span class=\"math-container\">$$ \\begin{align} h(a\\oplus(b\\oplus c)) \\,&amp;=\\, h(a)+h(b\\oplus c)\\\\ &amp;=\\, h(a)+(h(b) + h(c))\\\\ &amp;=\\, (h(a) + h(b))+h(c)\\\\ &amp;=\\, 
h(a\\oplus b)+h(c)\\\\ &amp;=\\, h((a\\oplus b)\\oplus c)\\end{align}\\qquad $$</span></p>\n<p>Therefore we conclude that <span class=\"math-container\">$\\ a\\oplus (b\\oplus c) = (a\\oplus b)\\oplus c\\ $</span> by applying <span class=\"math-container\">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p>\n" }, { "answer_id": 2109918, "score": 2, "text": "<p>If you consider $a\\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p>\n\n<p>$$(a\\otimes b) \\oplus (a\\otimes c)= (ab+a+b)\\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$\n$$=a(b+c)+2a+b+c+1$$\nThat is equal to\n$$a\\otimes(b\\oplus c)=a\\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$\n$$=a(b+c)+2a+b+c+1$$</p>\n\n<p>Finally look that \n$$a\\otimes(b\\otimes c)=a\\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$\n$$=abc+ab+ac+bc+a+b+c$$\nAnd\n$$(a\\otimes b)\\otimes c=(ab+a+b)\\otimes c=(ab+a+b)c+(ab+a+b)+c$$\n$$=abc+ab+ac+bc+a+b+c$$</p>\n" }, { "answer_id": 2109982, "score": 2, "text": "<p>The most straightforward approach to solving this problem is to prove $(\\mathbb{R}, \\oplus) \\cong (\\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\\otimes$ in terms of $\\times$.</p>\n" } ]
Question: $(\mathbb R, \oplus)$ is a group. Define a multiplication with which we get a field, where $a \oplus b = a + b + 1$. Answer 1: <p>If you consider $a\otimes b= ab+a+b$, it is clear that this is a commutative operation, since the expression is symmetric in $a$ and $b$. Notice that:</p> <p>$$(a\otimes b) \oplus (a\otimes c)= (ab+a+b)\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$ $$=a(b+c)+2a+b+c+1$$ That is equal to $$a\otimes(b\oplus c)=a\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$ $$=a(b+c)+2a+b+c+1$$</p> <p>Finally, note that $$a\otimes(b\otimes c)=a\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$ $$=abc+ab+ac+bc+a+b+c$$ And $$(a\otimes b)\otimes c=(ab+a+b)\otimes c=(ab+a+b)c+(ab+a+b)+c$$ $$=abc+ab+ac+bc+a+b+c$$</p> Answer 2: <p>The most straightforward approach to solving this problem is to prove $(\mathbb{R}, \oplus) \cong (\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\otimes$ in terms of $\times$.</p> Answer 3: <p>Let us start by noticing that $$(a-1)\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$ This means that $f\colon\mathbb R\to\mathbb R$ defined by $f(x)=x-1$ is a <a href="https://en.wikipedia.org/wiki/Group_isomorphism" rel="nofollow noreferrer">group isomorphism</a> between $(\mathbb R,+)$ and $(\mathbb R,\oplus)$. (The map $f$ is a bijection and fulfills $f(a+b)=f(a)\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism from $(\mathbb R,\oplus)$ to $(\mathbb R,+)$.</p> <p>I had a brief glance at the book mentioned in the question - it seems that the notion of a group and the notion of isomorphism are not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. We are basically saying that after "renaming" the objects, the two structures $(\mathbb R,+)$ and $(\mathbb R,\oplus)$ are the same.</p> <p>Just notice this part of a <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="nofollow noreferrer">Cayley table</a> showing the results of the operations. 
(Of course, since we are working with an infinite set, we cannot put the whole set $\mathbb R$ into a finite table.) </p> <p>$$ \begin{array}{cc} \begin{array}{c|ccccc} + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\hline -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\ 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \end{array} &amp; \begin{array}{c|ccccc} \oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\hline -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\ -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \end{array} \end{array} $$ You can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\mapsto-4$, $-2\mapsto-3$, $-1\mapsto-2$, etc., then we get the second table. </p> <p>In this sense, the two operations are "the same"; we have just "renamed" the elements. (And $x\mapsto x-1$ is the dictionary.) </p> <hr> <p>Now we know that $(\mathbb R,+,\cdot)$ is a field. We also know that the "renaming" $f\colon x\mapsto x-1$ creates $\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\cdot$? $$a\odot b = (a+1)\cdot(b+1)-1=ab+a+b$$ or $$a\odot b +1 = (a+1)\cdot(b+1).$$ Therefore $a\odot b=ab+a+b$ seems like a natural candidate for the multiplication. (Notice that I have used the inverse mapping $x\mapsto x+1$.)</p> <hr> <p>This can be formulated a bit more simply in case you are already familiar with the notions of group isomorphism and field isomorphism. I have posted a lengthier answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. 
Perhaps this longer answer might be useful for some people reading this question, too.)</p> <p>I will add the link to this post, which might also help to gain some insight: <a href="https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms">Intuition on group homomorphisms</a>.</p> <p>And I will also add another link to a post where at least some of the field axioms for $(\mathbb R,\oplus,\odot)$ are verified: <a href="https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p> Answer 4: <p><strong>Hint</strong> <span class="math-container">$ $</span> Any set bijection <span class="math-container">$\,h\,:\,R'\to R\,$</span> serves to <a href="https://math.stackexchange.com/search?q=user%3A242+transport">transport the (ring) structure</a> of <span class="math-container">$\,(R,+,*,0,1),$</span> to <span class="math-container">$\,(R',\oplus,\otimes,0',1') \,$</span> by simply defining the operations in <span class="math-container">$\,R'\,$</span> to make <span class="math-container">$\,h\,$</span> be a (ring) isomorphism</p> <p><span class="math-container">$$\begin{align} &amp;a \oplus b\, =\ h^{-1}(h(a) + h(b)),\quad 0' = h^{-1}(0)\\ &amp;a \otimes b\, =\, h^{-1}(h(a)\, * h(b)),\quad\, 1' = h^{-1}(1)\end{align}\qquad $$</span></p> <p>Yours is the special case <span class="math-container">$\ h(x) = x+1.$</span></p> <p>As an example let's explicitly examine how the associative law is transported from <span class="math-container">$R$</span> to <span class="math-container">$R'$</span>.</p> <p><span class="math-container">$$ \begin{align} h(a\oplus(b\oplus c)) \,&amp;=\, h(a)+h(b\oplus c)\\ &amp;=\, h(a)+(h(b) + h(c))\\ &amp;=\, (h(a) + h(b))+h(c)\\ &amp;=\, h(a\oplus b)+h(c)\\ &amp;=\, h((a\oplus b)\oplus c)\end{align}\qquad $$</span></p> <p>Therefore we conclude that <span 
class="math-container">$\ a\oplus (b\oplus c) = (a\oplus b)\oplus c\ $</span> by applying <span class="math-container">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p> Answer 5: <p>If you consider $a\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p> <p>$$(a\otimes b) \oplus (a\otimes c)= (ab+a+b)\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$ $$=a(b+c)+2a+b+c+1$$ That is equal to $$a\otimes(b\oplus c)=a\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$ $$=a(b+c)+2a+b+c+1$$</p> <p>Finally look that $$a\otimes(b\otimes c)=a\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$ $$=abc+ab+ac+bc+a+b+c$$ And $$(a\otimes b)\otimes c=(ab+a+b)\otimes c=(ab+a+b)c+(ab+a+b)+c$$ $$=abc+ab+ac+bc+a+b+c$$</p> Answer 6: <p>The most straightforward approach to solving this problem is to prove $(\mathbb{R}, \oplus) \cong (\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\otimes$ in terms of $\times$.</p> Answer 7: <p>Let us start by noticing that $$(a-1)\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$ This means that $f\colon\mathbb R\to\mathbb R$ defined by $f(x)=x-1$ is a <a href="https://en.wikipedia.org/wiki/Group_isomorphism" rel="nofollow noreferrer">group isomorphism</a> between $(\mathbb R,+)$ and $(\mathbb R,\oplus)$. (The map $f$ is bijection and fulfills $f(a+b)=f(a)\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism $(\mathbb R,\oplus)$ to $(\mathbb R,+)$.</p> <p>I had a brief glance at the book mentioned in the question - it seems that the notion of of group and the notion of isomorphism is not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. 
We are basically saying that after "renaming" the objects the two structures $(\mathbb R,+)$ and $(\mathbb R,\oplus)$ are the same.</p> <p>Just notice this part of <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="nofollow noreferrer">Cayley table</a> showing results of operation. (Of course, since we are working with an infinite set, we cannot put the whole set $\mathbb R$ into a finite table.) </p> <p>$$ \begin{array}{cc} \begin{array}{c|ccccc} + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\hline -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\ 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \end{array} &amp; \begin{array}{c|ccccc} \oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\hline -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\ -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \end{array} \end{array} $$ You can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\mapsto-4$, $-2\mapsto-3$, $-1\mapsto-2$, etc., then we get the second table. </p> <p>In this sense, the two operations are "the same", we just "renamed" the elements. (And $x\mapsto x-1$ is the dictionary.) </p> <hr> <p>Now we know that $(\mathbb R,+,\cdot)$ is a field. We also know that using "renaming" $f\colon x\mapsto x-1$ creates $\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\cdot$? $$a\odot b = (a+1)\cdot(b+1)-1=ab+a+b$$ or $$a\odot b +1 = (a+1)\cdot(b+1) +1.$$ Therefore $a\odot b=ab+a+b$ seems like a natural candidate for the multiplication. (Notice that I have used the inverse mapping $x\mapsto x+1$.)</p> <hr> <p>This can be formulated a bit simpler in case you are already familiar with the notion of group isomorphism and field isomorphism. 
I have posted a bit more lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. Perhaps this longer answer might be useful for some people reading this question, too.)</p> <p>I will add the link to this post, which might also help to gain some insight: <a href="https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms">Intuition on group homomorphisms</a>.</p> <p>And I will also add another link to a post where at least some of the field axioms for $(\mathbb R,\oplus,\odot)$ are verified: <a href="https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p> Answer 8: <p><strong>Hint</strong> <span class="math-container">$ $</span> Any set bijection <span class="math-container">$\,h\,:\,R'\to R\,$</span> serves to <a href="https://math.stackexchange.com/search?q=user%3A242+transport">transport the (ring) structure</a> of <span class="math-container">$\,(R,+,*,0,1),$</span> to <span class="math-container">$\,(R',\oplus,\otimes,0',1') \,$</span> by simply defining the operations in <span class="math-container">$\,R'\,$</span> to make <span class="math-container">$\,h\,$</span> be a (ring) isomorphism</p> <p><span class="math-container">$$\begin{align} &amp;a \oplus b\, =\ h^{-1}(h(a) + h(b)),\quad 0' = h^{-1}(0)\\ &amp;a \otimes b\, =\, h^{-1}(h(a)\, * h(b)),\quad\, 1' = h^{-1}(1)\end{align}\qquad $$</span></p> <p>Yours is the special case <span class="math-container">$\ h(x) = x+1.$</span></p> <p>As an example let's explicitly examine how the associative law is transported from <span class="math-container">$R$</span> to <span class="math-container">$R'$</span>.</p> <p><span class="math-container">$$ \begin{align} h(a\oplus(b\oplus c)) \,&amp;=\, 
h(a)+h(b\oplus c)\\ &amp;=\, h(a)+(h(b) + h(c))\\ &amp;=\, (h(a) + h(b))+h(c)\\ &amp;=\, h(a\oplus b)+h(c)\\ &amp;=\, h((a\oplus b)\oplus c)\end{align}\qquad $$</span></p> <p>Therefore we conclude that <span class="math-container">$\ a\oplus (b\oplus c) = (a\oplus b)\oplus c\ $</span> by applying <span class="math-container">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p> Answer 9: <p>If you consider $a\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p> <p>$$(a\otimes b) \oplus (a\otimes c)= (ab+a+b)\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$ $$=a(b+c)+2a+b+c+1$$ That is equal to $$a\otimes(b\oplus c)=a\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$ $$=a(b+c)+2a+b+c+1$$</p> <p>Finally look that $$a\otimes(b\otimes c)=a\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$ $$=abc+ab+ac+bc+a+b+c$$ And $$(a\otimes b)\otimes c=(ab+a+b)\otimes c=(ab+a+b)c+(ab+a+b)+c$$ $$=abc+ab+ac+bc+a+b+c$$</p> Answer 10: <p>The most straightforward approach to solving this problem is to prove $(\mathbb{R}, \oplus) \cong (\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\otimes$ in terms of $\times$.</p> Answer 11: <p>Let us start by noticing that $$(a-1)\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$ This means that $f\colon\mathbb R\to\mathbb R$ defined by $f(x)=x-1$ is a <a href="https://en.wikipedia.org/wiki/Group_isomorphism" rel="nofollow noreferrer">group isomorphism</a> between $(\mathbb R,+)$ and $(\mathbb R,\oplus)$. (The map $f$ is bijection and fulfills $f(a+b)=f(a)\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism $(\mathbb R,\oplus)$ to $(\mathbb R,+)$.</p> <p>I had a brief glance at the book mentioned in the question - it seems that the notion of of group and the notion of isomorphism is not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. 
We are basically saying that after "renaming" the objects the two structures $(\mathbb R,+)$ and $(\mathbb R,\oplus)$ are the same.</p> <p>Just notice this part of <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="nofollow noreferrer">Cayley table</a> showing results of operation. (Of course, since we are working with an infinite set, we cannot put the whole set $\mathbb R$ into a finite table.) </p> <p>$$ \begin{array}{cc} \begin{array}{c|ccccc} + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\hline -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\ 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \end{array} &amp; \begin{array}{c|ccccc} \oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\hline -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\ -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \end{array} \end{array} $$ You can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\mapsto-4$, $-2\mapsto-3$, $-1\mapsto-2$, etc., then we get the second table. </p> <p>In this sense, the two operations are "the same", we just "renamed" the elements. (And $x\mapsto x-1$ is the dictionary.) </p> <hr> <p>Now we know that $(\mathbb R,+,\cdot)$ is a field. We also know that using "renaming" $f\colon x\mapsto x-1$ creates $\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\cdot$? $$a\odot b = (a+1)\cdot(b+1)-1=ab+a+b$$ or $$a\odot b +1 = (a+1)\cdot(b+1) +1.$$ Therefore $a\odot b=ab+a+b$ seems like a natural candidate for the multiplication. (Notice that I have used the inverse mapping $x\mapsto x+1$.)</p> <hr> <p>This can be formulated a bit simpler in case you are already familiar with the notion of group isomorphism and field isomorphism. 
I have posted a somewhat lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for readers who are already familiar with them, there is already a shorter answer. Perhaps this longer answer might be useful for some people reading this question, too.)</p> <p>I will add a link to this post, which might also help you gain some insight: <a href="https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms">Intuition on group homomorphisms</a>.</p> <p>And I will also add another link to a post where at least some of the field axioms for $(\mathbb R,\oplus,\odot)$ are verified: <a href="https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p> Answer 12: <p><strong>Hint</strong> <span class="math-container">$ $</span> Any set bijection <span class="math-container">$\,h\,:\,R'\to R\,$</span> serves to <a href="https://math.stackexchange.com/search?q=user%3A242+transport">transport the (ring) structure</a> of <span class="math-container">$\,(R,+,*,0,1)\,$</span> to <span class="math-container">$\,(R',\oplus,\otimes,0',1') \,$</span> by simply defining the operations in <span class="math-container">$\,R'\,$</span> to make <span class="math-container">$\,h\,$</span> a (ring) isomorphism</p> <p><span class="math-container">$$\begin{align} &amp;a \oplus b\, =\ h^{-1}(h(a) + h(b)),\quad 0' = h^{-1}(0)\\ &amp;a \otimes b\, =\, h^{-1}(h(a)\, * h(b)),\quad\, 1' = h^{-1}(1)\end{align}\qquad $$</span></p> <p>Yours is the special case <span class="math-container">$\ h(x) = x+1.$</span></p> <p>As an example, let's explicitly examine how the associative law is transported from <span class="math-container">$R$</span> to <span class="math-container">$R'$</span>.</p> <p><span class="math-container">$$ \begin{align} h(a\oplus(b\oplus c)) \,&amp;=\,
h(a)+h(b\oplus c)\\ &amp;=\, h(a)+(h(b) + h(c))\\ &amp;=\, (h(a) + h(b))+h(c)\\ &amp;=\, h(a\oplus b)+h(c)\\ &amp;=\, h((a\oplus b)\oplus c)\end{align}\qquad $$</span></p> <p>Therefore we conclude that <span class="math-container">$\ a\oplus (b\oplus c) = (a\oplus b)\oplus c\ $</span> by applying <span class="math-container">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p> Answer 13: <p>If you consider $a\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p> <p>$$(a\otimes b) \oplus (a\otimes c)= (ab+a+b)\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$ $$=a(b+c)+2a+b+c+1$$ That is equal to $$a\otimes(b\oplus c)=a\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$ $$=a(b+c)+2a+b+c+1$$</p> <p>Finally look that $$a\otimes(b\otimes c)=a\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$ $$=abc+ab+ac+bc+a+b+c$$ And $$(a\otimes b)\otimes c=(ab+a+b)\otimes c=(ab+a+b)c+(ab+a+b)+c$$ $$=abc+ab+ac+bc+a+b+c$$</p> Answer 14: <p>The most straightforward approach to solving this problem is to prove $(\mathbb{R}, \oplus) \cong (\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\otimes$ in terms of $\times$.</p> Answer 15: <p>Let us start by noticing that $$(a-1)\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$ This means that $f\colon\mathbb R\to\mathbb R$ defined by $f(x)=x-1$ is a <a href="https://en.wikipedia.org/wiki/Group_isomorphism" rel="nofollow noreferrer">group isomorphism</a> between $(\mathbb R,+)$ and $(\mathbb R,\oplus)$. (The map $f$ is bijection and fulfills $f(a+b)=f(a)\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism $(\mathbb R,\oplus)$ to $(\mathbb R,+)$.</p> <p>I had a brief glance at the book mentioned in the question - it seems that the notion of of group and the notion of isomorphism is not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. 
We are basically saying that after "renaming" the objects the two structures $(\mathbb R,+)$ and $(\mathbb R,\oplus)$ are the same.</p> <p>Just notice this part of <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="nofollow noreferrer">Cayley table</a> showing results of operation. (Of course, since we are working with an infinite set, we cannot put the whole set $\mathbb R$ into a finite table.) </p> <p>$$ \begin{array}{cc} \begin{array}{c|ccccc} + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\hline -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\ 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \end{array} &amp; \begin{array}{c|ccccc} \oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\hline -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\ -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \end{array} \end{array} $$ You can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\mapsto-4$, $-2\mapsto-3$, $-1\mapsto-2$, etc., then we get the second table. </p> <p>In this sense, the two operations are "the same", we just "renamed" the elements. (And $x\mapsto x-1$ is the dictionary.) </p> <hr> <p>Now we know that $(\mathbb R,+,\cdot)$ is a field. We also know that using "renaming" $f\colon x\mapsto x-1$ creates $\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\cdot$? $$a\odot b = (a+1)\cdot(b+1)-1=ab+a+b$$ or $$a\odot b +1 = (a+1)\cdot(b+1) +1.$$ Therefore $a\odot b=ab+a+b$ seems like a natural candidate for the multiplication. (Notice that I have used the inverse mapping $x\mapsto x+1$.)</p> <hr> <p>This can be formulated a bit simpler in case you are already familiar with the notion of group isomorphism and field isomorphism. 
I have posted a bit more lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. Perhaps this longer answer might be useful for some people reading this question, too.)</p> <p>I will add the link to this post, which might also help to gain some insight: <a href="https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms">Intuition on group homomorphisms</a>.</p> <p>And I will also add another link to a post where at least some of the field axioms for $(\mathbb R,\oplus,\odot)$ are verified: <a href="https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p> Answer 16: <p><strong>Hint</strong> <span class="math-container">$ $</span> Any set bijection <span class="math-container">$\,h\,:\,R'\to R\,$</span> serves to <a href="https://math.stackexchange.com/search?q=user%3A242+transport">transport the (ring) structure</a> of <span class="math-container">$\,(R,+,*,0,1),$</span> to <span class="math-container">$\,(R',\oplus,\otimes,0',1') \,$</span> by simply defining the operations in <span class="math-container">$\,R'\,$</span> to make <span class="math-container">$\,h\,$</span> be a (ring) isomorphism</p> <p><span class="math-container">$$\begin{align} &amp;a \oplus b\, =\ h^{-1}(h(a) + h(b)),\quad 0' = h^{-1}(0)\\ &amp;a \otimes b\, =\, h^{-1}(h(a)\, * h(b)),\quad\, 1' = h^{-1}(1)\end{align}\qquad $$</span></p> <p>Yours is the special case <span class="math-container">$\ h(x) = x+1.$</span></p> <p>As an example let's explicitly examine how the associative law is transported from <span class="math-container">$R$</span> to <span class="math-container">$R'$</span>.</p> <p><span class="math-container">$$ \begin{align} h(a\oplus(b\oplus c)) \,&amp;=\, 
h(a)+h(b\oplus c)\\ &amp;=\, h(a)+(h(b) + h(c))\\ &amp;=\, (h(a) + h(b))+h(c)\\ &amp;=\, h(a\oplus b)+h(c)\\ &amp;=\, h((a\oplus b)\oplus c)\end{align}\qquad $$</span></p> <p>Therefore we conclude that <span class="math-container">$\ a\oplus (b\oplus c) = (a\oplus b)\oplus c\ $</span> by applying <span class="math-container">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p> Answer 17: <p>If you consider $a\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p> <p>$$(a\otimes b) \oplus (a\otimes c)= (ab+a+b)\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$ $$=a(b+c)+2a+b+c+1$$ That is equal to $$a\otimes(b\oplus c)=a\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$ $$=a(b+c)+2a+b+c+1$$</p> <p>Finally look that $$a\otimes(b\otimes c)=a\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$ $$=abc+ab+ac+bc+a+b+c$$ And $$(a\otimes b)\otimes c=(ab+a+b)\otimes c=(ab+a+b)c+(ab+a+b)+c$$ $$=abc+ab+ac+bc+a+b+c$$</p> Answer 18: <p>The most straightforward approach to solving this problem is to prove $(\mathbb{R}, \oplus) \cong (\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\otimes$ in terms of $\times$.</p> Answer 19: <p>Let us start by noticing that $$(a-1)\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$ This means that $f\colon\mathbb R\to\mathbb R$ defined by $f(x)=x-1$ is a <a href="https://en.wikipedia.org/wiki/Group_isomorphism" rel="nofollow noreferrer">group isomorphism</a> between $(\mathbb R,+)$ and $(\mathbb R,\oplus)$. (The map $f$ is bijection and fulfills $f(a+b)=f(a)\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism $(\mathbb R,\oplus)$ to $(\mathbb R,+)$.</p> <p>I had a brief glance at the book mentioned in the question - it seems that the notion of of group and the notion of isomorphism is not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. 
We are basically saying that after "renaming" the objects the two structures $(\mathbb R,+)$ and $(\mathbb R,\oplus)$ are the same.</p> <p>Just notice this part of <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="nofollow noreferrer">Cayley table</a> showing results of operation. (Of course, since we are working with an infinite set, we cannot put the whole set $\mathbb R$ into a finite table.) </p> <p>$$ \begin{array}{cc} \begin{array}{c|ccccc} + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\hline -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\ 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \end{array} &amp; \begin{array}{c|ccccc} \oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\hline -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\ -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \end{array} \end{array} $$ You can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\mapsto-4$, $-2\mapsto-3$, $-1\mapsto-2$, etc., then we get the second table. </p> <p>In this sense, the two operations are "the same", we just "renamed" the elements. (And $x\mapsto x-1$ is the dictionary.) </p> <hr> <p>Now we know that $(\mathbb R,+,\cdot)$ is a field. We also know that using "renaming" $f\colon x\mapsto x-1$ creates $\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\cdot$? $$a\odot b = (a+1)\cdot(b+1)-1=ab+a+b$$ or $$a\odot b +1 = (a+1)\cdot(b+1) +1.$$ Therefore $a\odot b=ab+a+b$ seems like a natural candidate for the multiplication. (Notice that I have used the inverse mapping $x\mapsto x+1$.)</p> <hr> <p>This can be formulated a bit simpler in case you are already familiar with the notion of group isomorphism and field isomorphism. 
I have posted a bit more lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. Perhaps this longer answer might be useful for some people reading this question, too.)</p> <p>I will add the link to this post, which might also help to gain some insight: <a href="https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms">Intuition on group homomorphisms</a>.</p> <p>And I will also add another link to a post where at least some of the field axioms for $(\mathbb R,\oplus,\odot)$ are verified: <a href="https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p> Answer 20: <p><strong>Hint</strong> <span class="math-container">$ $</span> Any set bijection <span class="math-container">$\,h\,:\,R'\to R\,$</span> serves to <a href="https://math.stackexchange.com/search?q=user%3A242+transport">transport the (ring) structure</a> of <span class="math-container">$\,(R,+,*,0,1),$</span> to <span class="math-container">$\,(R',\oplus,\otimes,0',1') \,$</span> by simply defining the operations in <span class="math-container">$\,R'\,$</span> to make <span class="math-container">$\,h\,$</span> be a (ring) isomorphism</p> <p><span class="math-container">$$\begin{align} &amp;a \oplus b\, =\ h^{-1}(h(a) + h(b)),\quad 0' = h^{-1}(0)\\ &amp;a \otimes b\, =\, h^{-1}(h(a)\, * h(b)),\quad\, 1' = h^{-1}(1)\end{align}\qquad $$</span></p> <p>Yours is the special case <span class="math-container">$\ h(x) = x+1.$</span></p> <p>As an example let's explicitly examine how the associative law is transported from <span class="math-container">$R$</span> to <span class="math-container">$R'$</span>.</p> <p><span class="math-container">$$ \begin{align} h(a\oplus(b\oplus c)) \,&amp;=\, 
h(a)+h(b\oplus c)\\ &amp;=\, h(a)+(h(b) + h(c))\\ &amp;=\, (h(a) + h(b))+h(c)\\ &amp;=\, h(a\oplus b)+h(c)\\ &amp;=\, h((a\oplus b)\oplus c)\end{align}\qquad $$</span></p> <p>Therefore we conclude that <span class="math-container">$\ a\oplus (b\oplus c) = (a\oplus b)\oplus c\ $</span> by applying <span class="math-container">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p> Answer 21: <p>If you consider $a\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p> <p>$$(a\otimes b) \oplus (a\otimes c)= (ab+a+b)\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$ $$=a(b+c)+2a+b+c+1$$ That is equal to $$a\otimes(b\oplus c)=a\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$ $$=a(b+c)+2a+b+c+1$$</p> <p>Finally look that $$a\otimes(b\otimes c)=a\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$ $$=abc+ab+ac+bc+a+b+c$$ And $$(a\otimes b)\otimes c=(ab+a+b)\otimes c=(ab+a+b)c+(ab+a+b)+c$$ $$=abc+ab+ac+bc+a+b+c$$</p> Answer 22: <p>The most straightforward approach to solving this problem is to prove $(\mathbb{R}, \oplus) \cong (\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\otimes$ in terms of $\times$.</p> Answer 23: <p>Let us start by noticing that $$(a-1)\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$ This means that $f\colon\mathbb R\to\mathbb R$ defined by $f(x)=x-1$ is a <a href="https://en.wikipedia.org/wiki/Group_isomorphism" rel="nofollow noreferrer">group isomorphism</a> between $(\mathbb R,+)$ and $(\mathbb R,\oplus)$. (The map $f$ is bijection and fulfills $f(a+b)=f(a)\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism $(\mathbb R,\oplus)$ to $(\mathbb R,+)$.</p> <p>I had a brief glance at the book mentioned in the question - it seems that the notion of of group and the notion of isomorphism is not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. 
We are basically saying that after "renaming" the objects the two structures $(\mathbb R,+)$ and $(\mathbb R,\oplus)$ are the same.</p> <p>Just notice this part of <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="nofollow noreferrer">Cayley table</a> showing results of operation. (Of course, since we are working with an infinite set, we cannot put the whole set $\mathbb R$ into a finite table.) </p> <p>$$ \begin{array}{cc} \begin{array}{c|ccccc} + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\hline -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\ 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \end{array} &amp; \begin{array}{c|ccccc} \oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\hline -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\ -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \end{array} \end{array} $$ You can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\mapsto-4$, $-2\mapsto-3$, $-1\mapsto-2$, etc., then we get the second table. </p> <p>In this sense, the two operations are "the same", we just "renamed" the elements. (And $x\mapsto x-1$ is the dictionary.) </p> <hr> <p>Now we know that $(\mathbb R,+,\cdot)$ is a field. We also know that using "renaming" $f\colon x\mapsto x-1$ creates $\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\cdot$? $$a\odot b = (a+1)\cdot(b+1)-1=ab+a+b$$ or $$a\odot b +1 = (a+1)\cdot(b+1) +1.$$ Therefore $a\odot b=ab+a+b$ seems like a natural candidate for the multiplication. (Notice that I have used the inverse mapping $x\mapsto x+1$.)</p> <hr> <p>This can be formulated a bit simpler in case you are already familiar with the notion of group isomorphism and field isomorphism. 
I have posted a bit more lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. Perhaps this longer answer might be useful for some people reading this question, too.)</p> <p>I will add the link to this post, which might also help to gain some insight: <a href="https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms">Intuition on group homomorphisms</a>.</p> <p>And I will also add another link to a post where at least some of the field axioms for $(\mathbb R,\oplus,\odot)$ are verified: <a href="https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p> Answer 24: <p><strong>Hint</strong> <span class="math-container">$ $</span> Any set bijection <span class="math-container">$\,h\,:\,R'\to R\,$</span> serves to <a href="https://math.stackexchange.com/search?q=user%3A242+transport">transport the (ring) structure</a> of <span class="math-container">$\,(R,+,*,0,1),$</span> to <span class="math-container">$\,(R',\oplus,\otimes,0',1') \,$</span> by simply defining the operations in <span class="math-container">$\,R'\,$</span> to make <span class="math-container">$\,h\,$</span> be a (ring) isomorphism</p> <p><span class="math-container">$$\begin{align} &amp;a \oplus b\, =\ h^{-1}(h(a) + h(b)),\quad 0' = h^{-1}(0)\\ &amp;a \otimes b\, =\, h^{-1}(h(a)\, * h(b)),\quad\, 1' = h^{-1}(1)\end{align}\qquad $$</span></p> <p>Yours is the special case <span class="math-container">$\ h(x) = x+1.$</span></p> <p>As an example let's explicitly examine how the associative law is transported from <span class="math-container">$R$</span> to <span class="math-container">$R'$</span>.</p> <p><span class="math-container">$$ \begin{align} h(a\oplus(b\oplus c)) \,&amp;=\, 
h(a)+h(b\oplus c)\\ &amp;=\, h(a)+(h(b) + h(c))\\ &amp;=\, (h(a) + h(b))+h(c)\\ &amp;=\, h(a\oplus b)+h(c)\\ &amp;=\, h((a\oplus b)\oplus c)\end{align}\qquad $$</span></p> <p>Therefore we conclude that <span class="math-container">$\ a\oplus (b\oplus c) = (a\oplus b)\oplus c\ $</span> by applying <span class="math-container">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p> Answer 25: <p>If you consider $a\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p> <p>$$(a\otimes b) \oplus (a\otimes c)= (ab+a+b)\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$ $$=a(b+c)+2a+b+c+1$$ That is equal to $$a\otimes(b\oplus c)=a\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$ $$=a(b+c)+2a+b+c+1$$</p> <p>Finally look that $$a\otimes(b\otimes c)=a\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$ $$=abc+ab+ac+bc+a+b+c$$ And $$(a\otimes b)\otimes c=(ab+a+b)\otimes c=(ab+a+b)c+(ab+a+b)+c$$ $$=abc+ab+ac+bc+a+b+c$$</p> Answer 26: <p>The most straightforward approach to solving this problem is to prove $(\mathbb{R}, \oplus) \cong (\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\otimes$ in terms of $\times$.</p> Answer 27: <p>Let us start by noticing that $$(a-1)\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$ This means that $f\colon\mathbb R\to\mathbb R$ defined by $f(x)=x-1$ is a <a href="https://en.wikipedia.org/wiki/Group_isomorphism" rel="nofollow noreferrer">group isomorphism</a> between $(\mathbb R,+)$ and $(\mathbb R,\oplus)$. (The map $f$ is bijection and fulfills $f(a+b)=f(a)\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism $(\mathbb R,\oplus)$ to $(\mathbb R,+)$.</p> <p>I had a brief glance at the book mentioned in the question - it seems that the notion of of group and the notion of isomorphism is not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. 
We are basically saying that after "renaming" the objects the two structures $(\mathbb R,+)$ and $(\mathbb R,\oplus)$ are the same.</p> <p>Just notice this part of <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="nofollow noreferrer">Cayley table</a> showing results of operation. (Of course, since we are working with an infinite set, we cannot put the whole set $\mathbb R$ into a finite table.) </p> <p>$$ \begin{array}{cc} \begin{array}{c|ccccc} + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\hline -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\ 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \end{array} &amp; \begin{array}{c|ccccc} \oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\hline -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\ -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \end{array} \end{array} $$ You can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\mapsto-4$, $-2\mapsto-3$, $-1\mapsto-2$, etc., then we get the second table. </p> <p>In this sense, the two operations are "the same", we just "renamed" the elements. (And $x\mapsto x-1$ is the dictionary.) </p> <hr> <p>Now we know that $(\mathbb R,+,\cdot)$ is a field. We also know that using "renaming" $f\colon x\mapsto x-1$ creates $\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\cdot$? $$a\odot b = (a+1)\cdot(b+1)-1=ab+a+b$$ or $$a\odot b +1 = (a+1)\cdot(b+1) +1.$$ Therefore $a\odot b=ab+a+b$ seems like a natural candidate for the multiplication. (Notice that I have used the inverse mapping $x\mapsto x+1$.)</p> <hr> <p>This can be formulated a bit simpler in case you are already familiar with the notion of group isomorphism and field isomorphism. 
I have posted a bit more lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. Perhaps this longer answer might be useful for some people reading this question, too.)</p> <p>I will add the link to this post, which might also help to gain some insight: <a href="https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms">Intuition on group homomorphisms</a>.</p> <p>And I will also add another link to a post where at least some of the field axioms for $(\mathbb R,\oplus,\odot)$ are verified: <a href="https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p> Answer 28: <p><strong>Hint</strong> <span class="math-container">$ $</span> Any set bijection <span class="math-container">$\,h\,:\,R'\to R\,$</span> serves to <a href="https://math.stackexchange.com/search?q=user%3A242+transport">transport the (ring) structure</a> of <span class="math-container">$\,(R,+,*,0,1),$</span> to <span class="math-container">$\,(R',\oplus,\otimes,0',1') \,$</span> by simply defining the operations in <span class="math-container">$\,R'\,$</span> to make <span class="math-container">$\,h\,$</span> be a (ring) isomorphism</p> <p><span class="math-container">$$\begin{align} &amp;a \oplus b\, =\ h^{-1}(h(a) + h(b)),\quad 0' = h^{-1}(0)\\ &amp;a \otimes b\, =\, h^{-1}(h(a)\, * h(b)),\quad\, 1' = h^{-1}(1)\end{align}\qquad $$</span></p> <p>Yours is the special case <span class="math-container">$\ h(x) = x+1.$</span></p> <p>As an example let's explicitly examine how the associative law is transported from <span class="math-container">$R$</span> to <span class="math-container">$R'$</span>.</p> <p><span class="math-container">$$ \begin{align} h(a\oplus(b\oplus c)) \,&amp;=\, 
h(a)+h(b\oplus c)\\ &amp;=\, h(a)+(h(b) + h(c))\\ &amp;=\, (h(a) + h(b))+h(c)\\ &amp;=\, h(a\oplus b)+h(c)\\ &amp;=\, h((a\oplus b)\oplus c)\end{align}\qquad $$</span></p> <p>Therefore we conclude that <span class="math-container">$\ a\oplus (b\oplus c) = (a\oplus b)\oplus c\ $</span> by applying <span class="math-container">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p> Answer 29: <p>If you consider $a\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p> <p>$$(a\otimes b) \oplus (a\otimes c)= (ab+a+b)\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$ $$=a(b+c)+2a+b+c+1$$ That is equal to $$a\otimes(b\oplus c)=a\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$ $$=a(b+c)+2a+b+c+1$$</p> <p>Finally look that $$a\otimes(b\otimes c)=a\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$ $$=abc+ab+ac+bc+a+b+c$$ And $$(a\otimes b)\otimes c=(ab+a+b)\otimes c=(ab+a+b)c+(ab+a+b)+c$$ $$=abc+ab+ac+bc+a+b+c$$</p> Answer 30: <p>The most straightforward approach to solving this problem is to prove $(\mathbb{R}, \oplus) \cong (\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\otimes$ in terms of $\times$.</p> Answer 31: <p>Let us start by noticing that $$(a-1)\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$ This means that $f\colon\mathbb R\to\mathbb R$ defined by $f(x)=x-1$ is a <a href="https://en.wikipedia.org/wiki/Group_isomorphism" rel="nofollow noreferrer">group isomorphism</a> between $(\mathbb R,+)$ and $(\mathbb R,\oplus)$. (The map $f$ is bijection and fulfills $f(a+b)=f(a)\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism $(\mathbb R,\oplus)$ to $(\mathbb R,+)$.</p> <p>I had a brief glance at the book mentioned in the question - it seems that the notion of of group and the notion of isomorphism is not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. 
We are basically saying that after "renaming" the objects the two structures $(\mathbb R,+)$ and $(\mathbb R,\oplus)$ are the same.</p> <p>Just notice this part of <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="nofollow noreferrer">Cayley table</a> showing results of operation. (Of course, since we are working with an infinite set, we cannot put the whole set $\mathbb R$ into a finite table.) </p> <p>$$ \begin{array}{cc} \begin{array}{c|ccccc} + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\hline -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\ 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \end{array} &amp; \begin{array}{c|ccccc} \oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\hline -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\ -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \end{array} \end{array} $$ You can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\mapsto-4$, $-2\mapsto-3$, $-1\mapsto-2$, etc., then we get the second table. </p> <p>In this sense, the two operations are "the same", we just "renamed" the elements. (And $x\mapsto x-1$ is the dictionary.) </p> <hr> <p>Now we know that $(\mathbb R,+,\cdot)$ is a field. We also know that using "renaming" $f\colon x\mapsto x-1$ creates $\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\cdot$? $$a\odot b = (a+1)\cdot(b+1)-1=ab+a+b$$ or $$a\odot b +1 = (a+1)\cdot(b+1) +1.$$ Therefore $a\odot b=ab+a+b$ seems like a natural candidate for the multiplication. (Notice that I have used the inverse mapping $x\mapsto x+1$.)</p> <hr> <p>This can be formulated a bit simpler in case you are already familiar with the notion of group isomorphism and field isomorphism. 
I have posted a bit more lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. Perhaps this longer answer might be useful for some people reading this question, too.)</p> <p>I will add the link to this post, which might also help to gain some insight: <a href="https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms">Intuition on group homomorphisms</a>.</p> <p>And I will also add another link to a post where at least some of the field axioms for $(\mathbb R,\oplus,\odot)$ are verified: <a href="https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p> Answer 32: <p><strong>Hint</strong> <span class="math-container">$ $</span> Any set bijection <span class="math-container">$\,h\,:\,R'\to R\,$</span> serves to <a href="https://math.stackexchange.com/search?q=user%3A242+transport">transport the (ring) structure</a> of <span class="math-container">$\,(R,+,*,0,1),$</span> to <span class="math-container">$\,(R',\oplus,\otimes,0',1') \,$</span> by simply defining the operations in <span class="math-container">$\,R'\,$</span> to make <span class="math-container">$\,h\,$</span> be a (ring) isomorphism</p> <p><span class="math-container">$$\begin{align} &amp;a \oplus b\, =\ h^{-1}(h(a) + h(b)),\quad 0' = h^{-1}(0)\\ &amp;a \otimes b\, =\, h^{-1}(h(a)\, * h(b)),\quad\, 1' = h^{-1}(1)\end{align}\qquad $$</span></p> <p>Yours is the special case <span class="math-container">$\ h(x) = x+1.$</span></p> <p>As an example let's explicitly examine how the associative law is transported from <span class="math-container">$R$</span> to <span class="math-container">$R'$</span>.</p> <p><span class="math-container">$$ \begin{align} h(a\oplus(b\oplus c)) \,&amp;=\, 
h(a)+h(b\oplus c)\\ &amp;=\, h(a)+(h(b) + h(c))\\ &amp;=\, (h(a) + h(b))+h(c)\\ &amp;=\, h(a\oplus b)+h(c)\\ &amp;=\, h((a\oplus b)\oplus c)\end{align}\qquad $$</span></p> <p>Therefore we conclude that <span class="math-container">$\ a\oplus (b\oplus c) = (a\oplus b)\oplus c\ $</span> by applying <span class="math-container">$h^{-1}$</span> to above. The other (universal equational) group axioms transport in exactly the same way.</p> Answer 33: <p>If you consider $a\otimes b= ab+a+b$. Is clear that is a commutative operation because of the structure. Notice that:</p> <p>$$(a\otimes b) \oplus (a\otimes c)= (ab+a+b)\oplus (ac+a+c)=(ab+a+b)+(ac+a+c)+1$$ $$=a(b+c)+2a+b+c+1$$ That is equal to $$a\otimes(b\oplus c)=a\otimes(b+c+1)=a(b+c+1)+a+(b+c+1)$$ $$=a(b+c)+2a+b+c+1$$</p> <p>Finally look that $$a\otimes(b\otimes c)=a\otimes (bc+b+c)=a(bc+b+c)+a+(bc+b+c)$$ $$=abc+ab+ac+bc+a+b+c$$ And $$(a\otimes b)\otimes c=(ab+a+b)\otimes c=(ab+a+b)c+(ab+a+b)+c$$ $$=abc+ab+ac+bc+a+b+c$$</p> Answer 34: <p>The most straightforward approach to solving this problem is to prove $(\mathbb{R}, \oplus) \cong (\mathbb{R}, +)$. Once you've done that, you can use the isomorphism to define $\otimes$ in terms of $\times$.</p> Answer 35: <p>Let us start by noticing that $$(a-1)\oplus(b-1)=(a-1)+(b-1)+1=(a+b)-1.$$ This means that $f\colon\mathbb R\to\mathbb R$ defined by $f(x)=x-1$ is a <a href="https://en.wikipedia.org/wiki/Group_isomorphism" rel="nofollow noreferrer">group isomorphism</a> between $(\mathbb R,+)$ and $(\mathbb R,\oplus)$. (The map $f$ is bijection and fulfills $f(a+b)=f(a)\oplus f(b)$.) And the inverse $g(x)=x+1$ is an isomorphism $(\mathbb R,\oplus)$ to $(\mathbb R,+)$.</p> <p>I had a brief glance at the book mentioned in the question - it seems that the notion of of group and the notion of isomorphism is not introduced in the book at this point. (Of course, I might have missed it.) But the basic intuition can be explained in a few words. 
We are basically saying that after "renaming" the objects the two structures $(\mathbb R,+)$ and $(\mathbb R,\oplus)$ are the same.</p> <p>Just notice this part of <a href="https://en.wikipedia.org/wiki/Cayley_table" rel="nofollow noreferrer">Cayley table</a> showing results of operation. (Of course, since we are working with an infinite set, we cannot put the whole set $\mathbb R$ into a finite table.) </p> <p>$$ \begin{array}{cc} \begin{array}{c|ccccc} + &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\\hline -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \\ 2 &amp; 0 &amp; 1 &amp; 2 &amp; 3 &amp; 4 \end{array} &amp; \begin{array}{c|ccccc} \oplus &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\\hline -3 &amp;-5 &amp;-4 &amp;-3 &amp;-2 &amp;-1 \\ -2 &amp;-4 &amp;-3 &amp;-2 &amp;-1 &amp; 0 \\ -1 &amp;-3 &amp;-2 &amp;-1 &amp; 0 &amp; 1 \\ 0 &amp;-2 &amp;-1 &amp; 0 &amp; 1 &amp; 2 \\ 1 &amp;-1 &amp; 0 &amp; 1 &amp; 2 &amp; 3 \end{array} \end{array} $$ You can see that if we replace in the first table each occurrence of $-4$ by $-5$ and similarly $-3\mapsto-4$, $-2\mapsto-3$, $-1\mapsto-2$, etc., then we get the second table. </p> <p>In this sense, the two operations are "the same", we just "renamed" the elements. (And $x\mapsto x-1$ is the dictionary.) </p> <hr> <p>Now we know that $(\mathbb R,+,\cdot)$ is a field. We also know that using "renaming" $f\colon x\mapsto x-1$ creates $\oplus$ from $+$. What happens if we apply the same thing to the usual multiplication $\cdot$? $$a\odot b = (a+1)\cdot(b+1)-1=ab+a+b$$ or $$a\odot b +1 = (a+1)\cdot(b+1) +1.$$ Therefore $a\odot b=ab+a+b$ seems like a natural candidate for the multiplication. (Notice that I have used the inverse mapping $x\mapsto x+1$.)</p> <hr> <p>This can be formulated a bit simpler in case you are already familiar with the notion of group isomorphism and field isomorphism. 
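The "renaming" idea can also be checked mechanically. Below is a small Python sketch (my own, not part of the original answer; the names `oplus`, `odot` and `f` are made up for illustration) verifying that $f(x)=x-1$ carries $+$ to $\oplus$, and that transporting ordinary multiplication the same way gives $a\odot b=ab+a+b$:

```python
from fractions import Fraction

def oplus(a, b):    # the given operation: a (+) b = a + b + 1
    return a + b + 1

def f(x):           # the "dictionary" f(x) = x - 1 from (R, +) to (R, (+))
    return x - 1

def odot(a, b):     # multiplication transported along x -> x - 1
    return (a + 1) * (b + 1) - 1   # simplifies to ab + a + b

samples = [Fraction(n, 3) for n in range(-6, 7)]
for a in samples:
    for b in samples:
        # f is a homomorphism: f(a + b) = f(a) (+) f(b)
        assert f(a + b) == oplus(f(a), f(b))
        # the transported product has the closed form ab + a + b
        assert odot(a, b) == a * b + a + b
print("renaming checks passed")
```

Exact `Fraction` arithmetic is used so the spot checks are not confused by floating-point rounding.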
I have posted a bit more lengthy answer on purpose. (I was not sure whether the OP is familiar with these notions. And for the readers who are already familiar with them, there already is a shorter answer. Perhaps this longer answer might be useful for some people reading this question, too.)</p> <p>I will add the link to this post, which might also help to gain some insight: <a href="https://math.stackexchange.com/questions/242348/intuition-on-group-homomorphisms">Intuition on group homomorphisms</a>.</p> <p>And I will also add another link to a post where at least some of the field axioms for $(\mathbb R,\oplus,\odot)$ are verified: <a href="https://math.stackexchange.com/questions/239750/prove-that-r-is-a-ring-under-special-definitions-of-multiplication-and-additio">Prove that R is a ring under &#39;special&#39; definitions of multiplication and addition</a>.</p>
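As a concrete sanity check of the transport-of-structure hint, here is a Python sketch of my own (assuming the special case $h(x)=x+1$ named above; all function names are illustrative). It defines the transported operations exactly as in the hint and spot-checks the ring axioms and the isomorphism property on a grid of sample rationals:

```python
from fractions import Fraction
from itertools import product

def h(x):            # the bijection from the hint, h(x) = x + 1
    return x + 1

def h_inv(x):        # its inverse, h^{-1}(x) = x - 1
    return x - 1

def oplus(a, b):     # a (+) b := h^{-1}(h(a) + h(b)), which equals a + b + 1
    return h_inv(h(a) + h(b))

def otimes(a, b):    # a (x) b := h^{-1}(h(a) * h(b)), which equals ab + a + b
    return h_inv(h(a) * h(b))

zero_prime = h_inv(0)    # transported additive identity: -1
one_prime = h_inv(1)     # transported multiplicative identity: 0

samples = [Fraction(n, 2) for n in range(-4, 5)]
for a, b, c in product(samples, repeat=3):
    assert oplus(a, b) == a + b + 1
    assert otimes(a, b) == a * b + a + b
    # h turns (+) into + and (x) into *, i.e. it is a ring isomorphism
    assert h(oplus(a, b)) == h(a) + h(b)
    assert h(otimes(a, b)) == h(a) * h(b)
    # the transported axioms then hold automatically
    assert oplus(a, oplus(b, c)) == oplus(oplus(a, b), c)                 # associativity
    assert otimes(a, oplus(b, c)) == oplus(otimes(a, b), otimes(a, c))    # distributivity
    assert oplus(a, zero_prime) == a and otimes(a, one_prime) == a        # identities
print("all spot checks passed")
```

Of course a finite grid of samples proves nothing by itself; the actual proof is the term-by-term transport argument given in the hint. The sketch only illustrates it.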
When a zero-dimensional topological space is normal
2,110,299
6
https://math.stackexchange.com/questions/2110299/when-a-zero-dimensional-topological-space-is-normal
<p>A topological space is called zero-dimensional whenever it has a basis of clopen sets for its topology. There is an exercise that states that every Hausdorff zero-dimensional space is normal, but I think it is false and there is a missing assumption in this exercise. Am I right?</p>
[ { "answer_id": 2110550, "score": 3, "text": "<p>No, this is false; some examples:</p>\n\n<ol>\n<li><p><a href=\"https://mathoverflow.net/a/53301/2060\">https://mathoverflow.net/a/53301/2060</a> describes the deleted Tychonov plank.</p></li>\n<li><p><a href=\"https://mathoverflow.net/a/56805/2060\">https://mathoverflow.net/a/56805/2060</a> describes the rational sequence topology </p></li>\n<li><p><a href=\"https://math.stackexchange.com/a/462029/4280\">https://math.stackexchange.com/a/462029/4280</a> describes Mrowka $\\Psi$-space</p></li>\n<li><p><a href=\"https://math.stackexchange.com/a/170740/4280\">https://math.stackexchange.com/a/170740/4280</a> gives a proof of non-normality of the Sorgenfrey square $S \\times S$, where $S$ is the reals in the lower limit topology (generated by the clopen sets $[a,b)$; the square is thus also zero-dimensional). </p></li>\n</ol>\n\n<p>All of these are zero-dimensional Hausdorff and not normal. The first 3 are also locally compact. 2, 3 and 4 are also separable (and first countable). So some extra conditions need not be enough to get normality.</p>\n" }, { "answer_id": 3699975, "score": 1, "text": "<p>I'm a few years late, but the missing assumption in your exercise might be that of second countability of the space. 
It is indeed true that second countable hausdorff zero dimensional spaces are normal, because zero dimensional implies (completely) regular and regular plus second countable implies normal.</p>\n\n<p>Showing that if <span class=\"math-container\">$X$</span> is Hausdorff, second countable and zero dimensional, then for every pair of closed sets <span class=\"math-container\">$C_1$</span> and <span class=\"math-container\">$C_2$</span> in <span class=\"math-container\">$X$</span> there is a clopen set <span class=\"math-container\">$B$</span> with <span class=\"math-container\">$C_1\\subseteq B$</span> and <span class=\"math-container\">$C_2\\cap B=\\varnothing$</span> is exercise 7.2 in Kechris's book <em>Classical Descriptive Set Theory</em></p>\n" } ]
Question: When a zero-dimensional topological space is normal Answer 1: <p>No, this is false; some examples:</p> <ol> <li><p><a href="https://mathoverflow.net/a/53301/2060">https://mathoverflow.net/a/53301/2060</a> describes the deleted Tychonov plank.</p></li> <li><p><a href="https://mathoverflow.net/a/56805/2060">https://mathoverflow.net/a/56805/2060</a> describes the rational sequence topology </p></li> <li><p><a href="https://math.stackexchange.com/a/462029/4280">https://math.stackexchange.com/a/462029/4280</a> describes Mrowka $\Psi$-space</p></li> <li><p><a href="https://math.stackexchange.com/a/170740/4280">https://math.stackexchange.com/a/170740/4280</a> gives a proof of non-normality of the Sorgenfrey square $S \times S$, where $S$ is the reals in the lower limit topology (generated by the clopen sets $[a,b)$; the square is thus also zero-dimensional). </p></li> </ol> <p>All of these are zero-dimensional Hausdorff and not normal. The first 3 are also locally compact. 2, 3 and 4 are also separable (and first countable). So some extra conditions need not be enough to get normality.</p> Answer 2: <p>I'm a few years late, but the missing assumption in your exercise might be that of second countability of the space. 
It is indeed true that second countable hausdorff zero dimensional spaces are normal, because zero dimensional implies (completely) regular and regular plus second countable implies normal.</p> <p>Showing that if <span class="math-container">$X$</span> is Hausdorff, second countable and zero dimensional, then for every pair of closed sets <span class="math-container">$C_1$</span> and <span class="math-container">$C_2$</span> in <span class="math-container">$X$</span> there is a clopen set <span class="math-container">$B$</span> with <span class="math-container">$C_1\subseteq B$</span> and <span class="math-container">$C_2\cap B=\varnothing$</span> is exercise 7.2 in Kechris's book <em>Classical Descriptive Set Theory</em></p>
What I am missing in this simple equation from Nesterov&#39;s paper?
2,113,108
6
https://math.stackexchange.com/questions/2113108/what-i-am-missing-in-this-simple-equation-from-nesterovs-paper
<p>In this paper by Prof Nesterov, <a href="http://www.optimization-online.org/DB_FILE/2010/12/2865.pdf" rel="nofollow noreferrer">First-order methods of smooth convex optimization with inexact oracle</a>, proof of Theorem 2, there is the following very simple equation which I think it is wrong, \begin{align} \Vert x_{k+1}-x^*\Vert^2 = \Vert x_{k}-x^*\Vert^2 + 2\langle B(x_{k+1}-x_k),x_{k+1}-x^*\rangle -\Vert x_{k+1}-x_k\Vert^2 \end{align} (They defined norm as $\Vert x\Vert^2 =\langle Bx,x\rangle$) Obviously, it must be \begin{align} \Vert x_{k+1}-x^*\Vert^2 = \Vert x_k-x^*+x_{k+1}-x_{k}\Vert^2 = \Vert x_{k}-x^*\Vert^2 + 2\langle B(x_{k+1}-x_k),x_{k}-x^*\rangle \\ +\Vert x_{k+1}-x_k\Vert^2 \end{align} Am I missing something? </p> <p>I also, checked journal version, <a href="http://link.springer.com/article/10.1007/s10107-013-0677-5" rel="nofollow noreferrer">Mathematical Programming</a>.</p>
[ { "answer_id": 2113165, "score": 8, "text": "<p>The author is correct. The norm in question is induced by the real inner product $(x,y):=\\langle Bx,y\\rangle$ where $B$ is positive definite with respect to $\\langle\\cdot,\\cdot\\rangle$. Let $u = x_{k+1} - x^\\ast$ and $v = x_{k+1}-x_k$. Then the equation is simply saying that\n$$\n\\|u\\|^2 = \\|u-v\\|^2 + 2(v,u) - \\|v\\|^2,\n$$\nwhich is just a rearrangement of terms in the cosine law\n$$\n\\|u-v\\|^2 = \\|u\\|^2 - 2(v,u) + \\|v\\|^2.\n$$</p>\n" }, { "answer_id": 2113126, "score": 3, "text": "<p>If it is bilinear, the derivation is correct.</p>\n\n<p>$$2\\langle B(x_{k+1}-x_k),x_k-x^*\\rangle +\\Vert x_{k+1}-x_k\\Vert^2=$$</p>\n\n<p>$$=2\\langle B(x_{k+1}-x_k),x_k-x^*\\rangle +2\\Vert x_{k+1}-x_k\\Vert ^2-\\Vert x_{k+1}-x_k\\Vert^2=$$</p>\n\n<p>$$=2\\left(\\langle B(x_{k+1}-x_k),x_k-x^*\\rangle +\\langle B(x_{k+1}-x_k),x_{k+1}-x_k\\rangle\\right)-\\Vert x_{k+1}-x_k\\Vert^2=$$</p>\n\n<p>$$=2\\langle B(x_{k+1}-x_k),x_k-x^*+x_{k+1}-x_k\\rangle -\\Vert x_{k+1}-x_k\\Vert^2=$$</p>\n\n<p>$$=2\\langle B(x_{k+1}-x_k),x_{k+1}-x^*\\rangle -\\Vert x_{k+1}-x_k\\Vert^2$$</p>\n" }, { "answer_id": 2113165, "score": 8, "text": "<p>The author is correct. The norm in question is induced by the real inner product $(x,y):=\\langle Bx,y\\rangle$ where $B$ is positive definite with respect to $\\langle\\cdot,\\cdot\\rangle$. Let $u = x_{k+1} - x^\\ast$ and $v = x_{k+1}-x_k$. 
Then the equation is simply saying that\n$$\n\\|u\\|^2 = \\|u-v\\|^2 + 2(v,u) - \\|v\\|^2,\n$$\nwhich is just a rearrangement of terms in the cosine law\n$$\n\\|u-v\\|^2 = \\|u\\|^2 - 2(v,u) + \\|v\\|^2.\n$$</p>\n" }, { "answer_id": 2113126, "score": 3, "text": "<p>If it is bilinear, the derivation is correct.</p>\n\n<p>$$2\\langle B(x_{k+1}-x_k),x_k-x^*\\rangle +\\Vert x_{k+1}-x_k\\Vert^2=$$</p>\n\n<p>$$=2\\langle B(x_{k+1}-x_k),x_k-x^*\\rangle +2\\Vert x_{k+1}-x_k\\Vert ^2-\\Vert x_{k+1}-x_k\\Vert^2=$$</p>\n\n<p>$$=2\\left(\\langle B(x_{k+1}-x_k),x_k-x^*\\rangle +\\langle B(x_{k+1}-x_k),x_{k+1}-x_k\\rangle\\right)-\\Vert x_{k+1}-x_k\\Vert^2=$$</p>\n\n<p>$$=2\\langle B(x_{k+1}-x_k),x_k-x^*+x_{k+1}-x_k\\rangle -\\Vert x_{k+1}-x_k\\Vert^2=$$</p>\n\n<p>$$=2\\langle B(x_{k+1}-x_k),x_{k+1}-x^*\\rangle -\\Vert x_{k+1}-x_k\\Vert^2$$</p>\n" } ]
Question: What I am missing in this simple equation from Nesterov&#39;s paper? Answer 1: <p>The author is correct. The norm in question is induced by the real inner product $(x,y):=\langle Bx,y\rangle$ where $B$ is positive definite with respect to $\langle\cdot,\cdot\rangle$. Let $u = x_{k+1} - x^\ast$ and $v = x_{k+1}-x_k$. Then the equation is simply saying that $$ \|u\|^2 = \|u-v\|^2 + 2(v,u) - \|v\|^2, $$ which is just a rearrangement of terms in the cosine law $$ \|u-v\|^2 = \|u\|^2 - 2(v,u) + \|v\|^2. $$</p> Answer 2: <p>If it is bilinear, the derivation is correct.</p> <p>$$2\langle B(x_{k+1}-x_k),x_k-x^*\rangle +\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\langle B(x_{k+1}-x_k),x_k-x^*\rangle +2\Vert x_{k+1}-x_k\Vert ^2-\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\left(\langle B(x_{k+1}-x_k),x_k-x^*\rangle +\langle B(x_{k+1}-x_k),x_{k+1}-x_k\rangle\right)-\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\langle B(x_{k+1}-x_k),x_k-x^*+x_{k+1}-x_k\rangle -\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\langle B(x_{k+1}-x_k),x_{k+1}-x^*\rangle -\Vert x_{k+1}-x_k\Vert^2$$</p>
$$</p> Answer 16: <p>If it is bilinear, the derivation is correct.</p> <p>$$2\langle B(x_{k+1}-x_k),x_k-x^*\rangle +\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\langle B(x_{k+1}-x_k),x_k-x^*\rangle +2\Vert x_{k+1}-x_k\Vert ^2-\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\left(\langle B(x_{k+1}-x_k),x_k-x^*\rangle +\langle B(x_{k+1}-x_k),x_{k+1}-x_k\rangle\right)-\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\langle B(x_{k+1}-x_k),x_k-x^*+x_{k+1}-x_k\rangle -\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\langle B(x_{k+1}-x_k),x_{k+1}-x^*\rangle -\Vert x_{k+1}-x_k\Vert^2$$</p> Answer 17: <p>The author is correct. The norm in question is induced by the real inner product $(x,y):=\langle Bx,y\rangle$ where $B$ is positive definite with respect to $\langle\cdot,\cdot\rangle$. Let $u = x_{k+1} - x^\ast$ and $v = x_{k+1}-x_k$. Then the equation is simply saying that $$ \|u\|^2 = \|u-v\|^2 + 2(v,u) - \|v\|^2, $$ which is just a rearrangement of terms in the cosine law $$ \|u-v\|^2 = \|u\|^2 - 2(v,u) + \|v\|^2. $$</p> Answer 18: <p>If it is bilinear, the derivation is correct.</p> <p>$$2\langle B(x_{k+1}-x_k),x_k-x^*\rangle +\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\langle B(x_{k+1}-x_k),x_k-x^*\rangle +2\Vert x_{k+1}-x_k\Vert ^2-\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\left(\langle B(x_{k+1}-x_k),x_k-x^*\rangle +\langle B(x_{k+1}-x_k),x_{k+1}-x_k\rangle\right)-\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\langle B(x_{k+1}-x_k),x_k-x^*+x_{k+1}-x_k\rangle -\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\langle B(x_{k+1}-x_k),x_{k+1}-x^*\rangle -\Vert x_{k+1}-x_k\Vert^2$$</p> Answer 19: <p>The author is correct. The norm in question is induced by the real inner product $(x,y):=\langle Bx,y\rangle$ where $B$ is positive definite with respect to $\langle\cdot,\cdot\rangle$. Let $u = x_{k+1} - x^\ast$ and $v = x_{k+1}-x_k$. Then the equation is simply saying that $$ \|u\|^2 = \|u-v\|^2 + 2(v,u) - \|v\|^2, $$ which is just a rearrangement of terms in the cosine law $$ \|u-v\|^2 = \|u\|^2 - 2(v,u) + \|v\|^2. 
$$</p> Answer 20: <p>If it is bilinear, the derivation is correct.</p> <p>$$2\langle B(x_{k+1}-x_k),x_k-x^*\rangle +\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\langle B(x_{k+1}-x_k),x_k-x^*\rangle +2\Vert x_{k+1}-x_k\Vert ^2-\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\left(\langle B(x_{k+1}-x_k),x_k-x^*\rangle +\langle B(x_{k+1}-x_k),x_{k+1}-x_k\rangle\right)-\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\langle B(x_{k+1}-x_k),x_k-x^*+x_{k+1}-x_k\rangle -\Vert x_{k+1}-x_k\Vert^2=$$</p> <p>$$=2\langle B(x_{k+1}-x_k),x_{k+1}-x^*\rangle -\Vert x_{k+1}-x_k\Vert^2$$</p>
Number theory/combinatorics problem?
2,114,119
6
https://math.stackexchange.com/questions/2114119/number-theory-combinatorics-problem
<p>Given any set of $14$ (different) natural numbers, prove that for some $k$ ($1 ≤ k ≤ 7$) there exist two disjoint $k$-element subsets $\{a_1,...,a_k\}$ and $\{b_1,...,b_k\}$ such that the sum of the reciprocals of all elements in each set differ by less than $0.001$, i.e. $|A−B| &lt; 0.001$, where $A =$ the sum of the reciprocals of the elements of the first subset and $B =$ the sum of the reciprocals of the elements of the second subset.</p> <p>Note: this problem is from the $1998$ Czech and Slovak Math Olympiad </p>
[ { "answer_id": 2119510, "score": 13, "text": "<p>Consider the $\\binom{14}{7} = 3432$ $7$-element subsets of the fourteen numbers, and look at the sums of the reciprocals of the numbers in each subset. Each sum is at most$$1 + {1\\over2} + \\ldots + {1\\over7} = {{363}\\over{140}} &lt; 2.60,$$so each of the $3432$ sums lies in one of the $2600$ intervals$$\\left(0, {1\\over{1000}}\\right], \\text{ }\\left({1\\over{1000}}, {2\\over{1000}}\\right], \\text{ }\\ldots\\text{ }, \\text{ }\\left({{2599}\\over{1000}}, {{2600}\\over{1000}}\\right].$$Then by the pigeonhole principle, some two sums lie in the same interval. Taking the corresponding sets and discarding any possible common elements, we obtain two satisfactory subsets $A$ and $B$.</p>\n" }, { "answer_id": 2119480, "score": 3, "text": "<p>Here is a greedy approach, which suggests that $14$ could be lowered to $13$ but is not a proof. If the first number is very large to keep its reciprocal small, the next one needs to be at most $999$ so its reciprocal exceeds $0.001$. Then the next needs to be at most $499$ so its reciprocal exceeds $0.002$. The next reciprocal needs to be at least $0.004$, so the number is at most $249$. We can represent these as $0,1,2,4$, the number of thousandths in the reciprocal. If we continue we get $0, 1, 2, 4, 7, 13, 24, 44, 84, 161, 309, 594, 1164, 2284\\ldots$, shown in <a href=\"http://oeis.org/A005318\" rel=\"nofollow noreferrer\">OEIS $A005318$</a>. The fact that the thirteenth term exceeds $1000$ suggests we have run out of room already.</p>\n" } ]
Question: Number theory/combinatorics problem? Answer 1: <p>Consider the $\binom{14}{7} = 3432$ $7$-element subsets of the fourteen numbers, and look at the sums of the reciprocals of the numbers in each subset. Each sum is at most$$1 + {1\over2} + \ldots + {1\over7} = {{363}\over{140}} &lt; 2.60,$$so each of the $3432$ sums lies in one of the $2600$ intervals$$\left(0, {1\over{1000}}\right], \text{ }\left({1\over{1000}}, {2\over{1000}}\right], \text{ }\ldots\text{ }, \text{ }\left({{2599}\over{1000}}, {{2600}\over{1000}}\right].$$Then by the pigeonhole principle, some two sums lie in the same interval. Taking the corresponding sets and discarding any possible common elements, we obtain two satisfactory subsets $A$ and $B$.</p> Answer 2: <p>Here is a greedy approach, which suggests that $14$ could be lowered to $13$ but is not a proof. If the first number is very large to keep its reciprocal small, the next one needs to be at most $999$ so its reciprocal exceeds $0.001$. Then the next needs to be at most $499$ so its reciprocal exceeds $0.002$. The next reciprocal needs to be at least $0.004$, so the number is at most $249$. We can represent these as $0,1,2,4$, the number of thousandths in the reciprocal. If we continue we get $0, 1, 2, 4, 7, 13, 24, 44, 84, 161, 309, 594, 1164, 2284\ldots$, shown in <a href="http://oeis.org/A005318" rel="nofollow noreferrer">OEIS $A005318$</a>. The fact that the thirteenth term exceeds $1000$ suggests we have run out of room already.</p>
Each sum is at most$$1 + {1\over2} + \ldots + {1\over7} = {{363}\over{140}} &lt; 2.60,$$so each of the $3432$ sums lies in one of the $2600$ intervals$$\left(0, {1\over{1000}}\right], \text{ }\left({1\over{1000}}, {2\over{1000}}\right], \text{ }\ldots\text{ }, \text{ }\left({{2599}\over{1000}}, {{2600}\over{1000}}\right].$$Then by the pigeonhole principle, some two sums lie in the same interval. Taking the corresponding sets and discarding any possible common elements, we obtain two satisfactory subsets $A$ and $B$.</p> Answer 20: <p>Here is a greedy approach, which suggests that $14$ could be lowered to $13$ but is not a proof. If the first number is very large to keep its reciprocal small, the next one needs to be at most $999$ so its reciprocal exceeds $0.001$. Then the next needs to be at most $499$ so its reciprocal exceeds $0.002$. The next reciprocal needs to be at least $0.004$, so the number is at most $249$. We can represent these as $0,1,2,4$, the number of thousandths in the reciprocal. If we continue we get $0, 1, 2, 4, 7, 13, 24, 44, 84, 161, 309, 594, 1164, 2284\ldots$, shown in <a href="http://oeis.org/A005318" rel="nofollow noreferrer">OEIS $A005318$</a>. The fact that the thirteenth term exceeds $1000$ suggests we have run out of room already.</p>
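The counting in the pigeonhole argument above is easy to confirm numerically. This small check is an editorial addition, not part of the original answers:

```python
from fractions import Fraction
from math import comb

# Number of 7-element subsets of the fourteen numbers
subsets = comb(14, 7)

# Largest possible sum of seven distinct unit fractions: 1 + 1/2 + ... + 1/7
max_sum = sum(Fraction(1, k) for k in range(1, 8))

assert subsets == 3432
assert max_sum == Fraction(363, 140)
assert max_sum < Fraction(26, 10)  # 363/140 < 2.60

# 2600 half-open intervals of width 1/1000 cover (0, 2.6], and 3432 > 2600,
# so by pigeonhole two subset sums must land in the same interval.
assert subsets > 2600
```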
Complete the short exact sequence $0\to \mathbb Z\to A\to Z_n \to 0$
2,114,257
6
https://math.stackexchange.com/questions/2114257/complete-the-short-exact-sequence-0-to-mathbb-z-to-a-to-z-n-to-0
<p>I want to find all possible $A$ in the following SES:</p> <blockquote> <p>$$0\to \mathbb Z\to A\to Z_n \to 0$$</p> </blockquote> <p>I know by the structure theorem for finitely generated abelian groups that $A\cong \mathbb Z\oplus \mathbb Z_d$. Then how can we find the relation between $d$ and $n$? What are the two maps in the middle?</p> <p>Since this is an exercise in Hatcher's book at the beginning of the section on homology theory, a solution without using Ext would be much better.</p>
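For orientation, the smallest nontrivial case already exhibits both possible shapes of $A$; the following display is an added illustration, not part of the original post. For $n = 2$ there are the two extensions

```latex
% Two extensions of Z_2 by Z (illustration for n = 2):
0 \to \mathbb Z \xrightarrow{\;\times 2\;} \mathbb Z
  \xrightarrow{\;a \,\mapsto\, a \bmod 2\;} \mathbb Z_2 \to 0,
\qquad
0 \to \mathbb Z \xrightarrow{\;a \,\mapsto\, (a,0)\;} \mathbb Z \oplus \mathbb Z_2
  \xrightarrow{\;(a,b) \,\mapsto\, b\;} \mathbb Z_2 \to 0.
```

the first with $A = \mathbb Z$ (no torsion summand), the second with $A = \mathbb Z \oplus \mathbb Z_2$, i.e. $d = 2 = n$.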
[ { "answer_id": 2114775, "score": 6, "text": "<p>Since I don't know what we can assume, let's classify all those extensions by brute force. Since it's a problem in a chapter about homology, $A$ is probably meant to be abelian but let's go ahead and do the general case. I'll outline a proof in the form of exercises. At the end, I outline the relationship between the explicit constructions and the cohomological argument.</p>\n\n<p>First, name the arrows in the exact sequence: $i: \\mathbf Z \\rightarrow A$ and $p: A \\rightarrow \\mathbf Z_n$.</p>\n\n<p>Let $a = i(1)$ and let $b \\in A$ such that $p(b) = 1$.</p>\n\n<blockquote>\n <p><strong>Exercise 1:</strong> $a$ and $b$ generate $A$</p>\n</blockquote>\n\n<p>So, $$A = \\langle a, b : \\text{ some relations we need to find}\\rangle$$</p>\n\n<blockquote>\n <p><strong>Exercise 2</strong>: $b^n = a^r$ and $bab^{-1} = a^s$ for some integers $r$ and $s$.</p>\n</blockquote>\n\n<p><em>Hint: What are the images of $bab^{-1}$ and $b^n$ under $p$?.</em></p>\n\n<blockquote>\n <p><strong>Exercise 3</strong>: $a^{s^n} = a$ and thus $s = \\pm 1$.</p>\n</blockquote>\n\n<p><em>Hint: use the relations we have so far. For the last part, recall that $a$ is the generator of a copy of $\\mathbf Z$ in $A$.</em></p>\n\n<blockquote>\n <p><strong>Exercise 4:</strong> Suppose $s = 1$. Let $t = \\gcd(n, r)$ and write $$1 = \\alpha \\frac n t + \\beta \\frac r t,$$ for suitable $\\alpha, \\beta \\in \\mathbf Z$. Then, $$A \\cong \\mathbf Z \\times \\mathbf Z_t,$$ with $i': \\mathbf Z \\rightarrow \\mathbf Z \\times \\mathbf Z_t$ and $p': \\mathbf Z \\times \\mathbf Z_t \\rightarrow \\mathbf Z_n$ given by $$i'(1) = (n/t,-\\beta)\\\\ p'(1,0) = \\beta \\\\ p'(0,1) = n/t.$$</p>\n</blockquote>\n\n<p><em>Hint (so it doesn't look like magic): $s = 1$ implies $A$ abelian and the relation $b^n = a^r$ can then be rewritten as $1 = (b^{n/t}a^{-r/t})^t$. In fact, $t$ is the order of $b^{n/t}a^{-r/t}$ in $A$. 
Show that $b^{n/t}a^{-r/t}$ and $b^{\\beta}a^{\\alpha}$ generate $A$ and consider the homomorphism $\\varphi: \\mathbf Z \\times \\mathbf Z_t \\rightarrow A$ given by $\\varphi(1,0) = b^{\\beta}a^{\\alpha}$ and $\\varphi(0, 1)=b^{n/t}a^{-r/t}$. It sits in the commutative diagram</em>\n$\\require{AMScd}$\n\\begin{CD}\n0 @&gt;&gt;&gt; \\mathbf Z @&gt;{i'}&gt;&gt; \\mathbf Z \\times \\mathbf Z_t @&gt;{p'}&gt;&gt; \\mathbf Z_n @&gt;&gt;&gt; 0\\\\\n&amp; @V{=\\,}VV @VV{\\varphi}V @VV{\\, =}V \\\\\n0 @&gt;&gt;&gt; \\mathbf Z @&gt;&gt;{i}&gt; A @&gt;&gt;{p}&gt; \\mathbf Z_n @&gt;&gt;&gt; 0\\end{CD}</p>\n\n<blockquote>\n <p><strong>Exercise 5:</strong> Suppose $s = -1$. Then, $n$ is even, $r = 0$ and $$A \\cong \\mathbf Z \\rtimes_\\sigma \\mathbf Z_n,$$ where $\\sigma: \\mathbf Z_n \\rightarrow \\operatorname{Aut}(\\mathbf Z)$ is given by $\\sigma(1)(1) = -1$. The maps $i': \\mathbf Z \\rightarrow \\mathbf Z \\rtimes \\mathbf Z_n$ and $p': \\mathbf Z \\rtimes \\mathbf Z_n \\rightarrow \\mathbf Z_n$ are given by $$i'(1) = (1,0)\\\\ p'(1,0) = 0 \\\\ p'(0,1) = 1.$$</p>\n</blockquote>\n\n<p><em>Hint: in this case $ba^rb^{-1} = a^{-r}$ and $a^r = b^n$, so $r = 0$ and $b^n = 1$. Conclude that $A$ is isomorphic to $\\mathbf Z \\rtimes \\mathbf Z_n$ and classify all homomorphisms $\\mathbf Z_n \\rightarrow \\operatorname{Aut}(\\mathbf Z)$. Note that $\\mathbf Z$ has only two automorphisms and, by exercise 3, $n$ is even when $s = -1$.</em></p>\n\n<hr>\n\n<p>What's going on in cohomological terms?</p>\n\n<p>$\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z)$ classifies the extensions of $\\mathbf Z_n$ by $\\mathbf Z$ as $\\mathbf Z$-modules or, in other words, as abelian groups. 
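The maps in Exercise 4 can be verified mechanically for small parameters. The following sketch is an editorial addition (not part of the original answer; the function names are illustrative): it checks $p' \circ i' = 0$, surjectivity of $p'$, and $\ker p' = \operatorname{im} i'$ on a finite window of $\mathbf Z \times \mathbf Z_t$.

```python
from math import gcd

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def check_exercise_4(n, r):
    """Check exactness of 0 -> Z -> Z x Z_t -> Z_n -> 0 with the maps
    i'(1) = (n/t, -beta) and p'(x, y) = beta*x + (n/t)*y mod n,
    where t = gcd(n, r) and alpha*(n/t) + beta*(r/t) = 1.
    Exactness in the middle is tested on the window |x| <= n."""
    t = gcd(n, r)
    g, alpha, beta = ext_gcd(n // t, r // t)
    assert g == 1 and alpha * (n // t) + beta * (r // t) == 1
    p = lambda x, y: (beta * x + (n // t) * y) % n
    # p' o i' = 0
    assert p(n // t, -beta) == 0
    # p' is surjective onto Z_n
    assert {p(x, y) for x in range(n) for y in range(t)} == set(range(n))
    # ker p' = im i' on the window |x| <= n
    ker = {(x, y) for x in range(-n, n + 1) for y in range(t) if p(x, y) == 0}
    im = {(k * (n // t), (-k * beta) % t) for k in range(-t, t + 1)}
    assert ker == im
    return True

# every residue r modulo n gives a (possibly equivalent) extension
assert all(check_exercise_4(n, r) for n in range(2, 10) for r in range(n))
```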
To compute $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z)$, one would usually apply $\\operatorname{Hom}(\\cdot, \\mathbf Z)$ to the \"multiplication by $n$\" sequence $$0 \\rightarrow \\mathbf Z \\rightarrow \\mathbf Z \\rightarrow \\mathbf Z_n \\rightarrow 0$$ to get $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = \\mathbf Z/n\\mathbf Z$. Exercise 4 makes this equality explicit. The class of $r$ in $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = \\mathbf Z/n\\mathbf Z$ corresponds to the extension $\\mathbf Z \\times \\mathbf Z_t$ with maps $p'$ and $i'$ as in the statement of Exercise 4.</p>\n\n<p>Non-abelian group extensions of $\\mathbf Z_n$ by $\\mathbf Z$ are not detected by $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = H^2(\\mathbf Z_n, \\mathbf Z)$. To detect more general extensions, one could look, for example, at $H^3$ (see <a href=\"http://sierra.nmsu.edu/morandi/notes/GroupExtensions.pdf\" rel=\"nofollow noreferrer\">here</a>). IMO Exercise 5 is a more down to earth way of treating this case.</p>\n" }, { "answer_id": 2114785, "score": 5, "text": "<p>This is a nice exercise for a beginning section of homology theory, since doing this without Ext will make you really appreciate the machinery homological algebra has to offer.</p>\n\n<p>I do not think that I will go through all the details, but let us start:</p>\n\n<p>First of all, we have the case $\\mathbb Z_d = 0$ (one could refer to this as $d=1$), i.e. a short exact sequence</p>\n\n<p>$$0 \\to \\mathbb Z \\to \\mathbb Z \\to \\mathbb Z_n \\to 0.$$</p>\n\n<p>Of course the first map is given by $\\cdot m$ for some $m \\in \\mathbb Z$ and by looking at the cokernel, we get that $m = \\pm n$. The second map is given by $1 \\mapsto e$, where $e$ is co-prime to $n$.</p>\n\n<p>The much more involved case is the case $d \\geq 2$, i.e. 
where the summand $\\mathbb Z_d$ actually occurs.</p>\n\n<p>Call the first map $g: \\mathbb Z \\to \\mathbb Z \\oplus \\mathbb Z_d$ and the second map $f: \\mathbb Z \\oplus \\mathbb Z_d \\to \\mathbb Z_n$. Denote the kernel of $f$ by $K$. By exactness we get $K \\cong \\mathbb Z$.</p>\n\n<p>Note that $f(0,n)=0$. $K$ is torsion free, but $(0,n) \\in \\mathbb Z \\oplus \\mathbb Z_d$ is of course torsion (annihilated by $d$), hence we have $(0,n)=(0,0)$, i.e. $d|n$.</p>\n\n<blockquote>\n <p>So we have all possibilities for $A$, namely $A \\in \\{\\mathbb Z \\} \\cup\n\\{\\mathbb Z \\oplus \\mathbb Z_d ~ | ~ d \\geq 2,d | n\\}$.</p>\n</blockquote>\n\n<p>We still have to figure out the maps in the second case:</p>\n\n<p>Of course $g$ is given by $g(1) = (m,e)$ for some $m,e \\in \\mathbb Z, 0 \\leq e &lt; d$. Let us call the cokernel $C_{m,e}$.</p>\n\n<p>$C_{m,e}$ is generated by $e_1=(1,0),e_2=(0,1)$ and we have relations $me_1+ee_2=0$ and $de_2=0$, thus $C_{m,e}$ is presented by $$\\begin{pmatrix}m&amp;0\\\\e&amp;d\\end{pmatrix}.$$ Up to multiplication in $SL_2(\\mathbb Z)$, this should be equal to $$\\begin{pmatrix}n&amp;0\\\\0&amp;1\\end{pmatrix}$$, i.e. by looking at Fitting ideals, we get that</p>\n\n<p>$$md=\\pm n ~~ \\text{ and } ~~ (m,e,d)=(1).$$</p>\n\n<p>Thus we have found all possibilities for $g$. 
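The criterion $md = \pm n$ and $(m,e,d) = (1)$ can be sanity-checked by brute force. This sketch is an editorial addition, not part of the original answer (names are illustrative); for $m = n/d > 0$ it tests directly whether the quotient of $\mathbb Z \oplus \mathbb Z_d$ by the subgroup generated by $(m,e)$ is cyclic of order $n$:

```python
from math import gcd

def quotient_is_Zn(m, e, d, n):
    """Brute-force test: is (Z + Z_d) / <(m, e)> cyclic of order n?
    Assumes m > 0.  Coset representatives: (a, b), 0 <= a < m, 0 <= b < d."""
    if m * d != n:          # the quotient has exactly m*d elements
        return False

    def reduce(a, b):
        # subtract multiples of (m, e), then reduce the torsion part mod d
        k, a = divmod(a, m)
        return a, (b - k * e) % d

    def order(x, y):
        a, b = 0, 0
        for k in range(1, n + 1):
            a, b = reduce(a + x, b + y)
            if (a, b) == (0, 0):
                return k

    # cyclic of order n iff some element has order n
    return any(order(a, b) == n for a in range(m) for b in range(d))

# Check the claimed criterion for n = 12 and every divisor d >= 2 of n:
n = 12
for d in (2, 3, 4, 6, 12):
    m = n // d
    for e in range(d):
        assert quotient_is_Zn(m, e, d, n) == (gcd(gcd(m, e), d) == 1)
```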
To find $f$, note that we have the following commutative diagram of exact sequences:</p>\n\n<p>$$\\require{AMScd}\n\\begin{CD}\n0 @&gt;&gt;&gt; \\mathbb Z^2 @&gt;\\begin{pmatrix}m&amp;0\\\\e&amp;d\\end{pmatrix}&gt;&gt; \\mathbb Z^2 @&gt;&gt;&gt; C_{m,e} @&gt;&gt;&gt;0\\\\\n@VVV @VXVV @VVYV @VVV @VVV \\\\\n0 @&gt;&gt;&gt; \\mathbb Z^2 @&gt;&gt;\\begin{pmatrix}n&amp;0\\\\0&amp;1\\end{pmatrix}&gt; \\mathbb Z^2 @&gt;p&gt;&gt; \\mathbb Z_n @&gt;&gt;&gt;0\n\\end{CD}$$\n, where $X$ and $Y$ are obtained via the <a href=\"https://en.wikipedia.org/wiki/Smith_normal_form#Algorithm\" rel=\"nofollow noreferrer\">algorithm to determine the Smith normal form</a>.</p>\n\n<p>The map $p \\circ Y$ factors through $\\mathbb Z \\oplus \\mathbb Z_d$ and the factorization is $f$.</p>\n" }, { "answer_id": 2114775, "score": 6, "text": "<p>Since I don't know what we can assume, let's classify all those extensions by brute force. Since it's a problem in a chapter about homology, $A$ is probably meant to be abelian but let's go ahead and do the general case. I'll outline a proof in the form of exercises. At the end, I outline the relationship between the explicit constructions and the cohomological argument.</p>\n\n<p>First, name the arrows in the exact sequence: $i: \\mathbf Z \\rightarrow A$ and $p: A \\rightarrow \\mathbf Z_n$.</p>\n\n<p>Let $a = i(1)$ and let $b \\in A$ such that $p(b) = 1$.</p>\n\n<blockquote>\n <p><strong>Exercise 1:</strong> $a$ and $b$ generate $A$</p>\n</blockquote>\n\n<p>So, $$A = \\langle a, b : \\text{ some relations we need to find}\\rangle$$</p>\n\n<blockquote>\n <p><strong>Exercise 2</strong>: $b^n = a^r$ and $bab^{-1} = a^s$ for some integers $r$ and $s$.</p>\n</blockquote>\n\n<p><em>Hint: What are the images of $bab^{-1}$ and $b^n$ under $p$?.</em></p>\n\n<blockquote>\n <p><strong>Exercise 3</strong>: $a^{s^n} = a$ and thus $s = \\pm 1$.</p>\n</blockquote>\n\n<p><em>Hint: use the relations we have so far. 
For the last part, recall that $a$ is the generator of a copy of $\\mathbf Z$ in $A$.</em></p>\n\n<blockquote>\n <p><strong>Exercise 4:</strong> Suppose $s = 1$. Let $t = \\gcd(n, r)$ and write $$1 = \\alpha \\frac n t + \\beta \\frac r t,$$ for suitable $\\alpha, \\beta \\in \\mathbf Z$. Then, $$A \\cong \\mathbf Z \\times \\mathbf Z_t,$$ with $i': \\mathbf Z \\rightarrow \\mathbf Z \\times \\mathbf Z_t$ and $p': \\mathbf Z \\times \\mathbf Z_t \\rightarrow \\mathbf Z_n$ given by $$i'(1) = (n/t,-\\beta)\\\\ p'(1,0) = \\beta \\\\ p'(0,1) = n/t.$$</p>\n</blockquote>\n\n<p><em>Hint (so it doesn't look like magic): $s = 1$ implies $A$ abelian and the relation $b^n = a^r$ can then be rewritten as $1 = (b^{n/t}a^{-r/t})^t$. In fact, $t$ is the order of $b^{n/t}a^{-r/t}$ in $A$. Show that $b^{n/t}a^{-r/t}$ and $b^{\\beta}a^{\\alpha}$ generate $A$ and consider the homomorphism $\\varphi: \\mathbf Z \\times \\mathbf Z_t \\rightarrow A$ given by $\\varphi(1,0) = b^{\\beta}a^{\\alpha}$ and $\\varphi(0, 1)=b^{n/t}a^{-r/t}$. It sits in the commutative diagram</em>\n$\\require{AMScd}$\n\\begin{CD}\n0 @&gt;&gt;&gt; \\mathbf Z @&gt;{i'}&gt;&gt; \\mathbf Z \\times \\mathbf Z_t @&gt;{p'}&gt;&gt; \\mathbf Z_n @&gt;&gt;&gt; 0\\\\\n&amp; @V{=\\,}VV @VV{\\varphi}V @VV{\\, =}V \\\\\n0 @&gt;&gt;&gt; \\mathbf Z @&gt;&gt;{i}&gt; A @&gt;&gt;{p}&gt; \\mathbf Z_n @&gt;&gt;&gt; 0\\end{CD}</p>\n\n<blockquote>\n <p><strong>Exercise 5:</strong> Suppose $s = -1$. Then, $n$ is even, $r = 0$ and $$A \\cong \\mathbf Z \\rtimes_\\sigma \\mathbf Z_n,$$ where $\\sigma: \\mathbf Z_n \\rightarrow \\operatorname{Aut}(\\mathbf Z)$ is given by $\\sigma(1)(1) = -1$. The maps $i': \\mathbf Z \\rightarrow \\mathbf Z \\rtimes \\mathbf Z_n$ and $p': \\mathbf Z \\rtimes \\mathbf Z_n \\rightarrow \\mathbf Z_n$ are given by $$i'(1) = (1,0)\\\\ p'(1,0) = 0 \\\\ p'(0,1) = 1.$$</p>\n</blockquote>\n\n<p><em>Hint: in this case $ba^rb^{-1} = a^{-r}$ and $a^r = b^n$, so $r = 0$ and $b^n = 1$. 
Conclude that $A$ is isomorphic to $\\mathbf Z \\rtimes \\mathbf Z_n$ and classify all homomorphisms $\\mathbf Z_n \\rightarrow \\operatorname{Aut}(\\mathbf Z)$. Note that $\\mathbf Z$ has only two automorphisms and, by exercise 3, $n$ is even when $s = -1$.</em></p>\n\n<hr>\n\n<p>What's going on in cohomological terms?</p>\n\n<p>$\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z)$ classifies the extensions of $\\mathbf Z_n$ by $\\mathbf Z$ as $\\mathbf Z$-modules or, in other words, as abelian groups. To compute $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z)$, one would usually apply $\\operatorname{Hom}(\\cdot, \\mathbf Z)$ to the \"multiplication by $n$\" sequence $$0 \\rightarrow \\mathbf Z \\rightarrow \\mathbf Z \\rightarrow \\mathbf Z_n \\rightarrow 0$$ to get $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = \\mathbf Z/n\\mathbf Z$. Exercise 4 makes this equality explicit. The class of $r$ in $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = \\mathbf Z/n\\mathbf Z$ corresponds to the extension $\\mathbf Z \\times \\mathbf Z_t$ with maps $p'$ and $i'$ as in the statement of exercise 4.</p>\n\n<p>Non-abelian group extensions of $\\mathbf Z_n$ by $\\mathbf Z$ are not detected by $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = H^2(\\mathbf Z_n, \\mathbf Z)$. To detect more general extensions, one could look, for example, at $H^3$ (see <a href=\"http://sierra.nmsu.edu/morandi/notes/GroupExtensions.pdf\" rel=\"nofollow noreferrer\">here</a>). IMO Exercise 5 is a more down to earth way of treating this case.</p>\n" }, { "answer_id": 2114785, "score": 5, "text": "<p>This is a nice exercise for a beginning section of homology theory, since doing this without Ext will make you really appreciate the machinery homological algebra has to offer.</p>\n\n<p>I do not think, that I will go through all details, but let us start:</p>\n\n<p>First of all, we have the case $\\mathbb Z_d = 0$ (one could refer to this as $d=1$), i.e. 
a short exact sequence</p>\n\n<p>$$0 \\to \\mathbb Z \\to \\mathbb Z \\to \\mathbb Z_n \\to 0.$$</p>\n\n<p>Of course the first map is given by $\\cdot m$ for some $m \\in \\mathbb Z$ and by looking at the cokernel, we get that $m = \\pm n$. The second map is given by $1 \\mapsto e$, where $e$ is co-prime to $n$.</p>\n\n<p>The much more involved case is the case $d \\geq 2$, i.e. where the summand $\\mathbb Z_d$ actually occurs.</p>\n\n<p>Call the first map $g: \\mathbb Z \\to \\mathbb Z \\oplus \\mathbb Z_d$ and the second map $f: \\mathbb Z \\oplus \\mathbb Z_d \\to \\mathbb Z_n$. Denote the kernel of $f$ by $K$. By exactness we get $K \\cong \\mathbb Z$.</p>\n\n<p>Note that $f(0,n)=0$. $K$ is torsion free, but $(0,n) \\in \\mathbb Z \\oplus \\mathbb Z_d$ is of course torsion (annihilated by $d$), hence we have $(0,n)=(0,0)$, i.e. $d|n$.</p>\n\n<blockquote>\n <p>So we have all possibilites for $A$, namely $A \\in \\{\\mathbb Z \\} \\cup\n\\{\\mathbb Z \\oplus \\mathbb Z_d ~ | ~ d \\geq 2,d | n\\}$.</p>\n</blockquote>\n\n<p>We still have to figure out the maps in the second case:</p>\n\n<p>Of course $g$ is given by $g(1) = (m,e)$ for some $m,e \\in \\mathbb Z, 0 \\leq e &lt;d$. Let us call the cokernel $C_{m,e}$.</p>\n\n<p>$C_{m_e}$ is generated by $e_1=(1,0),e_2=(0,1)$ and we have relations $me_1+ee_2=0$ and $de_2=0$, thus $C_{m_e}$ is presented by $$\\begin{pmatrix}m&amp;0\\\\e&amp;d\\end{pmatrix}.$$ Upto multiplication in $SL_2(\\mathbb Z)$, this should be equal to $$\\begin{pmatrix}n&amp;0\\\\0&amp;1\\end{pmatrix}$$, i.e. by looking at fitting ideals, we get that</p>\n\n<p>$$md=\\pm n ~~ \\text{ and } ~~ (m,e,d)=(1).$$</p>\n\n<p>Thus we have found all possibilities for $g$. 
To find $f$, note that we have the following commutative diagram of exact sequences:</p>\n\n<p>$$\\require{AMScd}\n\\begin{CD}\n0 @&gt;&gt;&gt; \\mathbb Z^2 @&gt;\\begin{pmatrix}m&amp;0\\\\e&amp;d\\end{pmatrix}&gt;&gt; \\mathbb Z^2 @&gt;&gt;&gt; C_{m,e} @&gt;&gt;&gt;0\\\\\n@VVV @VXVV @VVYV @VVV @VVV \\\\\n0 @&gt;&gt;&gt; \\mathbb Z^2 @&gt;&gt;\\begin{pmatrix}n&amp;0\\\\0&amp;1\\end{pmatrix}&gt; \\mathbb Z^2 @&gt;p&gt;&gt; \\mathbb Z_n @&gt;&gt;&gt;0\n\\end{CD}$$\n, where $X$ and $Y$ are obtained via the <a href=\"https://en.wikipedia.org/wiki/Smith_normal_form#Algorithm\" rel=\"nofollow noreferrer\">algorithm to determine the Smith normal form</a>.</p>\n\n<p>The map $p \\circ Y$ factors through $\\mathbb Z \\oplus \\mathbb Z_d$ and the factorization is $f$.</p>\n" }, { "answer_id": 2114775, "score": 6, "text": "<p>Since I don't know what we can assume, let's classify all those extensions by brute force. Since it's a problem in a chapter about homology, $A$ is probably meant to be abelian but let's go ahead and do the general case. I'll outline a proof in the form of exercises. At the end, I outline the relationship between the explicit constructions and the cohomological argument.</p>\n\n<p>First, name the arrows in the exact sequence: $i: \\mathbf Z \\rightarrow A$ and $p: A \\rightarrow \\mathbf Z_n$.</p>\n\n<p>Let $a = i(1)$ and let $b \\in A$ such that $p(b) = 1$.</p>\n\n<blockquote>\n <p><strong>Exercise 1:</strong> $a$ and $b$ generate $A$</p>\n</blockquote>\n\n<p>So, $$A = \\langle a, b : \\text{ some relations we need to find}\\rangle$$</p>\n\n<blockquote>\n <p><strong>Exercise 2</strong>: $b^n = a^r$ and $bab^{-1} = a^s$ for some integers $r$ and $s$.</p>\n</blockquote>\n\n<p><em>Hint: What are the images of $bab^{-1}$ and $b^n$ under $p$?.</em></p>\n\n<blockquote>\n <p><strong>Exercise 3</strong>: $a^{s^n} = a$ and thus $s = \\pm 1$.</p>\n</blockquote>\n\n<p><em>Hint: use the relations we have so far. 
For the last part, recall that $a$ is the generator of a copy of $\\mathbf Z$ in $A$.</em></p>\n\n<blockquote>\n <p><strong>Exercise 4:</strong> Suppose $s = 1$. Let $t = \\gcd(n, r)$ and write $$1 = \\alpha \\frac n t + \\beta \\frac r t,$$ for suitable $\\alpha, \\beta \\in \\mathbf Z$. Then, $$A \\cong \\mathbf Z \\times \\mathbf Z_t,$$ with $i': \\mathbf Z \\rightarrow \\mathbf Z \\times \\mathbf Z_t$ and $p': \\mathbf Z \\times \\mathbf Z_t \\rightarrow \\mathbf Z_n$ given by $$i'(1) = (n/t,-\\beta)\\\\ p'(1,0) = \\beta \\\\ p'(0,1) = n/t.$$</p>\n</blockquote>\n\n<p><em>Hint (so it doesn't look like magic): $s = 1$ implies $A$ abelian and the relation $b^n = a^r$ can then be rewritten as $1 = (b^{n/t}a^{-r/t})^t$. In fact, $t$ is the order of $b^{n/t}a^{-r/t}$ in $A$. Show that $b^{n/t}a^{-r/t}$ and $b^{\\beta}a^{\\alpha}$ generate $A$ and consider the homomorphism $\\varphi: \\mathbf Z \\times \\mathbf Z_t \\rightarrow A$ given by $\\varphi(1,0) = b^{\\beta}a^{\\alpha}$ and $\\varphi(0, 1)=b^{n/t}a^{-r/t}$. It sits in the commutative diagram</em>\n$\\require{AMScd}$\n\\begin{CD}\n0 @&gt;&gt;&gt; \\mathbf Z @&gt;{i'}&gt;&gt; \\mathbf Z \\times \\mathbf Z_t @&gt;{p'}&gt;&gt; \\mathbf Z_n @&gt;&gt;&gt; 0\\\\\n&amp; @V{=\\,}VV @VV{\\varphi}V @VV{\\, =}V \\\\\n0 @&gt;&gt;&gt; \\mathbf Z @&gt;&gt;{i}&gt; A @&gt;&gt;{p}&gt; \\mathbf Z_n @&gt;&gt;&gt; 0\\end{CD}</p>\n\n<blockquote>\n <p><strong>Exercise 5:</strong> Suppose $s = -1$. Then, $n$ is even, $r = 0$ and $$A \\cong \\mathbf Z \\rtimes_\\sigma \\mathbf Z_n,$$ where $\\sigma: \\mathbf Z_n \\rightarrow \\operatorname{Aut}(\\mathbf Z)$ is given by $\\sigma(1)(1) = -1$. The maps $i': \\mathbf Z \\rightarrow \\mathbf Z \\rtimes \\mathbf Z_n$ and $p': \\mathbf Z \\rtimes \\mathbf Z_n \\rightarrow \\mathbf Z_n$ are given by $$i'(1) = (1,0)\\\\ p'(1,0) = 0 \\\\ p'(0,1) = 1.$$</p>\n</blockquote>\n\n<p><em>Hint: in this case $ba^rb^{-1} = a^{-r}$ and $a^r = b^n$, so $r = 0$ and $b^n = 1$. 
Conclude that $A$ is isomorphic to $\\mathbf Z \\rtimes \\mathbf Z_n$ and classify all homomorphisms $\\mathbf Z_n \\rightarrow \\operatorname{Aut}(\\mathbf Z)$. Note that $\\mathbf Z$ has only two automorphisms and, by exercise 3, $n$ is even when $s = -1$.</em></p>\n\n<hr>\n\n<p>What's going on in cohomological terms?</p>\n\n<p>$\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z)$ classifies the extensions of $\\mathbf Z_n$ by $\\mathbf Z$ as $\\mathbf Z$-modules or, in other words, as abelian groups. To compute $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z)$, one would usually apply $\\operatorname{Hom}(\\cdot, \\mathbf Z)$ to the \"multiplication by $n$\" sequence $$0 \\rightarrow \\mathbf Z \\rightarrow \\mathbf Z \\rightarrow \\mathbf Z_n \\rightarrow 0$$ to get $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = \\mathbf Z/n\\mathbf Z$. Exercise 4 makes this equality explicit. The class of $r$ in $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = \\mathbf Z/n\\mathbf Z$ corresponds to the extension $\\mathbf Z \\times \\mathbf Z_t$ with maps $p'$ and $i'$ as in the statement of exercise 4.</p>\n\n<p>Non-abelian group extensions of $\\mathbf Z_n$ by $\\mathbf Z$ are not detected by $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = H^2(\\mathbf Z_n, \\mathbf Z)$. To detect more general extensions, one could look, for example, at $H^3$ (see <a href=\"http://sierra.nmsu.edu/morandi/notes/GroupExtensions.pdf\" rel=\"nofollow noreferrer\">here</a>). IMO Exercise 5 is a more down to earth way of treating this case.</p>\n" }, { "answer_id": 2114785, "score": 5, "text": "<p>This is a nice exercise for a beginning section of homology theory, since doing this without Ext will make you really appreciate the machinery homological algebra has to offer.</p>\n\n<p>I do not think, that I will go through all details, but let us start:</p>\n\n<p>First of all, we have the case $\\mathbb Z_d = 0$ (one could refer to this as $d=1$), i.e. 
a short exact sequence</p>\n\n<p>$$0 \\to \\mathbb Z \\to \\mathbb Z \\to \\mathbb Z_n \\to 0.$$</p>\n\n<p>Of course the first map is given by $\\cdot m$ for some $m \\in \\mathbb Z$ and by looking at the cokernel, we get that $m = \\pm n$. The second map is given by $1 \\mapsto e$, where $e$ is co-prime to $n$.</p>\n\n<p>The much more involved case is the case $d \\geq 2$, i.e. where the summand $\\mathbb Z_d$ actually occurs.</p>\n\n<p>Call the first map $g: \\mathbb Z \\to \\mathbb Z \\oplus \\mathbb Z_d$ and the second map $f: \\mathbb Z \\oplus \\mathbb Z_d \\to \\mathbb Z_n$. Denote the kernel of $f$ by $K$. By exactness we get $K \\cong \\mathbb Z$.</p>\n\n<p>Note that $f(0,n)=0$. $K$ is torsion free, but $(0,n) \\in \\mathbb Z \\oplus \\mathbb Z_d$ is of course torsion (annihilated by $d$), hence we have $(0,n)=(0,0)$, i.e. $d|n$.</p>\n\n<blockquote>\n <p>So we have all possibilites for $A$, namely $A \\in \\{\\mathbb Z \\} \\cup\n\\{\\mathbb Z \\oplus \\mathbb Z_d ~ | ~ d \\geq 2,d | n\\}$.</p>\n</blockquote>\n\n<p>We still have to figure out the maps in the second case:</p>\n\n<p>Of course $g$ is given by $g(1) = (m,e)$ for some $m,e \\in \\mathbb Z, 0 \\leq e &lt;d$. Let us call the cokernel $C_{m,e}$.</p>\n\n<p>$C_{m_e}$ is generated by $e_1=(1,0),e_2=(0,1)$ and we have relations $me_1+ee_2=0$ and $de_2=0$, thus $C_{m_e}$ is presented by $$\\begin{pmatrix}m&amp;0\\\\e&amp;d\\end{pmatrix}.$$ Upto multiplication in $SL_2(\\mathbb Z)$, this should be equal to $$\\begin{pmatrix}n&amp;0\\\\0&amp;1\\end{pmatrix}$$, i.e. by looking at fitting ideals, we get that</p>\n\n<p>$$md=\\pm n ~~ \\text{ and } ~~ (m,e,d)=(1).$$</p>\n\n<p>Thus we have found all possibilities for $g$. 
To find $f$, note that we have the following commutative diagram of exact sequences:</p>\n\n<p>$$\\require{AMScd}\n\\begin{CD}\n0 @&gt;&gt;&gt; \\mathbb Z^2 @&gt;\\begin{pmatrix}m&amp;0\\\\e&amp;d\\end{pmatrix}&gt;&gt; \\mathbb Z^2 @&gt;&gt;&gt; C_{m,e} @&gt;&gt;&gt;0\\\\\n@VVV @VXVV @VVYV @VVV @VVV \\\\\n0 @&gt;&gt;&gt; \\mathbb Z^2 @&gt;&gt;\\begin{pmatrix}n&amp;0\\\\0&amp;1\\end{pmatrix}&gt; \\mathbb Z^2 @&gt;p&gt;&gt; \\mathbb Z_n @&gt;&gt;&gt;0\n\\end{CD}$$\n, where $X$ and $Y$ are obtained via the <a href=\"https://en.wikipedia.org/wiki/Smith_normal_form#Algorithm\" rel=\"nofollow noreferrer\">algorithm to determine the Smith normal form</a>.</p>\n\n<p>The map $p \\circ Y$ factors through $\\mathbb Z \\oplus \\mathbb Z_d$ and the factorization is $f$.</p>\n" }, { "answer_id": 2114775, "score": 6, "text": "<p>Since I don't know what we can assume, let's classify all those extensions by brute force. Since it's a problem in a chapter about homology, $A$ is probably meant to be abelian but let's go ahead and do the general case. I'll outline a proof in the form of exercises. At the end, I outline the relationship between the explicit constructions and the cohomological argument.</p>\n\n<p>First, name the arrows in the exact sequence: $i: \\mathbf Z \\rightarrow A$ and $p: A \\rightarrow \\mathbf Z_n$.</p>\n\n<p>Let $a = i(1)$ and let $b \\in A$ such that $p(b) = 1$.</p>\n\n<blockquote>\n <p><strong>Exercise 1:</strong> $a$ and $b$ generate $A$</p>\n</blockquote>\n\n<p>So, $$A = \\langle a, b : \\text{ some relations we need to find}\\rangle$$</p>\n\n<blockquote>\n <p><strong>Exercise 2</strong>: $b^n = a^r$ and $bab^{-1} = a^s$ for some integers $r$ and $s$.</p>\n</blockquote>\n\n<p><em>Hint: What are the images of $bab^{-1}$ and $b^n$ under $p$?.</em></p>\n\n<blockquote>\n <p><strong>Exercise 3</strong>: $a^{s^n} = a$ and thus $s = \\pm 1$.</p>\n</blockquote>\n\n<p><em>Hint: use the relations we have so far. 
For the last part, recall that $a$ is the generator of a copy of $\\mathbf Z$ in $A$.</em></p>\n\n<blockquote>\n <p><strong>Exercise 4:</strong> Suppose $s = 1$. Let $t = \\gcd(n, r)$ and write $$1 = \\alpha \\frac n t + \\beta \\frac r t,$$ for suitable $\\alpha, \\beta \\in \\mathbf Z$. Then, $$A \\cong \\mathbf Z \\times \\mathbf Z_t,$$ with $i': \\mathbf Z \\rightarrow \\mathbf Z \\times \\mathbf Z_t$ and $p': \\mathbf Z \\times \\mathbf Z_t \\rightarrow \\mathbf Z_n$ given by $$i'(1) = (n/t,-\\beta)\\\\ p'(1,0) = \\beta \\\\ p'(0,1) = n/t.$$</p>\n</blockquote>\n\n<p><em>Hint (so it doesn't look like magic): $s = 1$ implies $A$ abelian and the relation $b^n = a^r$ can then be rewritten as $1 = (b^{n/t}a^{-r/t})^t$. In fact, $t$ is the order of $b^{n/t}a^{-r/t}$ in $A$. Show that $b^{n/t}a^{-r/t}$ and $b^{\\beta}a^{\\alpha}$ generate $A$ and consider the homomorphism $\\varphi: \\mathbf Z \\times \\mathbf Z_t \\rightarrow A$ given by $\\varphi(1,0) = b^{\\beta}a^{\\alpha}$ and $\\varphi(0, 1)=b^{n/t}a^{-r/t}$. It sits in the commutative diagram</em>\n$\\require{AMScd}$\n\\begin{CD}\n0 @&gt;&gt;&gt; \\mathbf Z @&gt;{i'}&gt;&gt; \\mathbf Z \\times \\mathbf Z_t @&gt;{p'}&gt;&gt; \\mathbf Z_n @&gt;&gt;&gt; 0\\\\\n&amp; @V{=\\,}VV @VV{\\varphi}V @VV{\\, =}V \\\\\n0 @&gt;&gt;&gt; \\mathbf Z @&gt;&gt;{i}&gt; A @&gt;&gt;{p}&gt; \\mathbf Z_n @&gt;&gt;&gt; 0\\end{CD}</p>\n\n<blockquote>\n <p><strong>Exercise 5:</strong> Suppose $s = -1$. Then, $n$ is even, $r = 0$ and $$A \\cong \\mathbf Z \\rtimes_\\sigma \\mathbf Z_n,$$ where $\\sigma: \\mathbf Z_n \\rightarrow \\operatorname{Aut}(\\mathbf Z)$ is given by $\\sigma(1)(1) = -1$. The maps $i': \\mathbf Z \\rightarrow \\mathbf Z \\rtimes \\mathbf Z_n$ and $p': \\mathbf Z \\rtimes \\mathbf Z_n \\rightarrow \\mathbf Z_n$ are given by $$i'(1) = (1,0)\\\\ p'(1,0) = 0 \\\\ p'(0,1) = 1.$$</p>\n</blockquote>\n\n<p><em>Hint: in this case $ba^rb^{-1} = a^{-r}$ and $a^r = b^n$, so $r = 0$ and $b^n = 1$. 
Conclude that $A$ is isomorphic to $\\mathbf Z \\rtimes \\mathbf Z_n$ and classify all homomorphisms $\\mathbf Z_n \\rightarrow \\operatorname{Aut}(\\mathbf Z)$. Note that $\\mathbf Z$ has only two automorphisms and, by exercise 3, $n$ is even when $s = -1$.</em></p>\n\n<hr>\n\n<p>What's going on in cohomological terms?</p>\n\n<p>$\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z)$ classifies the extensions of $\\mathbf Z_n$ by $\\mathbf Z$ as $\\mathbf Z$-modules or, in other words, as abelian groups. To compute $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z)$, one would usually apply $\\operatorname{Hom}(\\cdot, \\mathbf Z)$ to the \"multiplication by $n$\" sequence $$0 \\rightarrow \\mathbf Z \\rightarrow \\mathbf Z \\rightarrow \\mathbf Z_n \\rightarrow 0$$ to get $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = \\mathbf Z/n\\mathbf Z$. Exercise 4 makes this equality explicit. The class of $r$ in $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = \\mathbf Z/n\\mathbf Z$ corresponds to the extension $\\mathbf Z \\times \\mathbf Z_t$ with maps $p'$ and $i'$ as in the statement of exercise 4.</p>\n\n<p>Non-abelian group extensions of $\\mathbf Z_n$ by $\\mathbf Z$ are not detected by $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = H^2(\\mathbf Z_n, \\mathbf Z)$. To detect more general extensions, one could look, for example, at $H^3$ (see <a href=\"http://sierra.nmsu.edu/morandi/notes/GroupExtensions.pdf\" rel=\"nofollow noreferrer\">here</a>). IMO Exercise 5 is a more down to earth way of treating this case.</p>\n" }, { "answer_id": 2114785, "score": 5, "text": "<p>This is a nice exercise for a beginning section of homology theory, since doing this without Ext will make you really appreciate the machinery homological algebra has to offer.</p>\n\n<p>I do not think, that I will go through all details, but let us start:</p>\n\n<p>First of all, we have the case $\\mathbb Z_d = 0$ (one could refer to this as $d=1$), i.e. 
a short exact sequence</p>\n\n<p>$$0 \\to \\mathbb Z \\to \\mathbb Z \\to \\mathbb Z_n \\to 0.$$</p>\n\n<p>Of course the first map is given by $\\cdot m$ for some $m \\in \\mathbb Z$ and by looking at the cokernel, we get that $m = \\pm n$. The second map is given by $1 \\mapsto e$, where $e$ is co-prime to $n$.</p>\n\n<p>The much more involved case is the case $d \\geq 2$, i.e. where the summand $\\mathbb Z_d$ actually occurs.</p>\n\n<p>Call the first map $g: \\mathbb Z \\to \\mathbb Z \\oplus \\mathbb Z_d$ and the second map $f: \\mathbb Z \\oplus \\mathbb Z_d \\to \\mathbb Z_n$. Denote the kernel of $f$ by $K$. By exactness we get $K \\cong \\mathbb Z$.</p>\n\n<p>Note that $f(0,n)=0$. $K$ is torsion free, but $(0,n) \\in \\mathbb Z \\oplus \\mathbb Z_d$ is of course torsion (annihilated by $d$), hence we have $(0,n)=(0,0)$, i.e. $d|n$.</p>\n\n<blockquote>\n <p>So we have all possibilities for $A$, namely $A \\in \\{\\mathbb Z \\} \\cup\n\\{\\mathbb Z \\oplus \\mathbb Z_d ~ | ~ d \\geq 2,d | n\\}$.</p>\n</blockquote>\n\n<p>We still have to figure out the maps in the second case:</p>\n\n<p>Of course $g$ is given by $g(1) = (m,e)$ for some $m,e \\in \\mathbb Z, 0 \\leq e &lt; d$. Let us call the cokernel $C_{m,e}$.</p>\n\n<p>$C_{m,e}$ is generated by $e_1=(1,0),e_2=(0,1)$ and we have relations $me_1+ee_2=0$ and $de_2=0$, thus $C_{m,e}$ is presented by $$\\begin{pmatrix}m&amp;0\\\\e&amp;d\\end{pmatrix}.$$ Up to multiplication in $SL_2(\\mathbb Z)$, this should be equal to $$\\begin{pmatrix}n&amp;0\\\\0&amp;1\\end{pmatrix},$$ i.e. by looking at Fitting ideals, we get that</p>\n\n<p>$$md=\\pm n ~~ \\text{ and } ~~ (m,e,d)=(1).$$</p>\n\n<p>Thus we have found all possibilities for $g$. 
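In passing, the Fitting-ideal conditions are easy to sanity-check numerically. The Python sketch below is only an illustration (the helper name `cokernel_invariants` is mine, not from the answer): for a full-rank $2\times 2$ integer matrix, the Smith normal form is $\operatorname{diag}(d_1, d_2)$ where $d_1$ is the gcd of the entries and $d_1 d_2 = |\det|$, so the cokernel of $\begin{pmatrix}m&0\\e&d\end{pmatrix}$ is $\mathbf Z_{d_1} \times \mathbf Z_{d_2}$, and it is $\mathbf Z_n$ exactly when $|md| = n$ and $\gcd(m,e,d) = 1$.

```python
from math import gcd

def cokernel_invariants(m, e, d):
    """Smith normal form invariants (d1, d2) of [[m, 0], [e, d]] over Z.

    d1 is the gcd of all entries and d1 * d2 = |det|, so the cokernel
    of the matrix acting on Z^2 is Z_d1 x Z_d2.  Assumes m*d != 0.
    """
    d1 = gcd(gcd(m, e), d)
    return d1, abs(m * d) // d1

# The cokernel is Z_n  <=>  |m*d| = n  and  gcd(m, e, d) = 1.
n, d = 12, 3
good = [(m, e) for m in range(1, n + 1) for e in range(d)
        if cokernel_invariants(m, e, d) == (1, n)]
print(good)  # [(4, 0), (4, 1), (4, 2)]
```

Note how the gcd condition matters: for $(m,e,d) = (12,0,3)$ the determinant is $36$, but the cokernel is $\mathbf Z_3 \times \mathbf Z_{12}$, not cyclic of order $36$.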
To find $f$, note that we have the following commutative diagram of exact sequences:</p>\n\n<p>$$\\require{AMScd}\n\\begin{CD}\n0 @&gt;&gt;&gt; \\mathbb Z^2 @&gt;\\begin{pmatrix}m&amp;0\\\\e&amp;d\\end{pmatrix}&gt;&gt; \\mathbb Z^2 @&gt;&gt;&gt; C_{m,e} @&gt;&gt;&gt;0\\\\\n@VVV @VXVV @VVYV @VVV @VVV \\\\\n0 @&gt;&gt;&gt; \\mathbb Z^2 @&gt;&gt;\\begin{pmatrix}n&amp;0\\\\0&amp;1\\end{pmatrix}&gt; \\mathbb Z^2 @&gt;p&gt;&gt; \\mathbb Z_n @&gt;&gt;&gt;0\n\\end{CD}$$\nwhere $X$ and $Y$ are obtained via the <a href=\"https://en.wikipedia.org/wiki/Smith_normal_form#Algorithm\" rel=\"nofollow noreferrer\">algorithm to determine the Smith normal form</a>.</p>\n\n<p>The map $p \\circ Y$ factors through $\\mathbb Z \\oplus \\mathbb Z_d$ and the factorization is $f$.</p>\n" }, { "answer_id": 2114775, "score": 6, "text": "<p>Since I don't know what we can assume, let's classify all those extensions by brute force. Since it's a problem in a chapter about homology, $A$ is probably meant to be abelian but let's go ahead and do the general case. I'll outline a proof in the form of exercises. At the end, I outline the relationship between the explicit constructions and the cohomological argument.</p>\n\n<p>First, name the arrows in the exact sequence: $i: \\mathbf Z \\rightarrow A$ and $p: A \\rightarrow \\mathbf Z_n$.</p>\n\n<p>Let $a = i(1)$ and let $b \\in A$ such that $p(b) = 1$.</p>\n\n<blockquote>\n <p><strong>Exercise 1:</strong> $a$ and $b$ generate $A$.</p>\n</blockquote>\n\n<p>So, $$A = \\langle a, b : \\text{ some relations we need to find}\\rangle$$</p>\n\n<blockquote>\n <p><strong>Exercise 2</strong>: $b^n = a^r$ and $bab^{-1} = a^s$ for some integers $r$ and $s$.</p>\n</blockquote>\n\n<p><em>Hint: What are the images of $bab^{-1}$ and $b^n$ under $p$?</em></p>\n\n<blockquote>\n <p><strong>Exercise 3</strong>: $a^{s^n} = a$ and thus $s = \\pm 1$.</p>\n</blockquote>\n\n<p><em>Hint: use the relations we have so far. 
For the last part, recall that $a$ is the generator of a copy of $\\mathbf Z$ in $A$.</em></p>\n\n<blockquote>\n <p><strong>Exercise 4:</strong> Suppose $s = 1$. Let $t = \\gcd(n, r)$ and write $$1 = \\alpha \\frac n t + \\beta \\frac r t,$$ for suitable $\\alpha, \\beta \\in \\mathbf Z$. Then, $$A \\cong \\mathbf Z \\times \\mathbf Z_t,$$ with $i': \\mathbf Z \\rightarrow \\mathbf Z \\times \\mathbf Z_t$ and $p': \\mathbf Z \\times \\mathbf Z_t \\rightarrow \\mathbf Z_n$ given by $$i'(1) = (n/t,-\\beta)\\\\ p'(1,0) = \\beta \\\\ p'(0,1) = n/t.$$</p>\n</blockquote>\n\n<p><em>Hint (so it doesn't look like magic): $s = 1$ implies $A$ abelian and the relation $b^n = a^r$ can then be rewritten as $1 = (b^{n/t}a^{-r/t})^t$. In fact, $t$ is the order of $b^{n/t}a^{-r/t}$ in $A$. Show that $b^{n/t}a^{-r/t}$ and $b^{\\beta}a^{\\alpha}$ generate $A$ and consider the homomorphism $\\varphi: \\mathbf Z \\times \\mathbf Z_t \\rightarrow A$ given by $\\varphi(1,0) = b^{\\beta}a^{\\alpha}$ and $\\varphi(0, 1)=b^{n/t}a^{-r/t}$. It sits in the commutative diagram</em>\n$\\require{AMScd}$\n\\begin{CD}\n0 @&gt;&gt;&gt; \\mathbf Z @&gt;{i'}&gt;&gt; \\mathbf Z \\times \\mathbf Z_t @&gt;{p'}&gt;&gt; \\mathbf Z_n @&gt;&gt;&gt; 0\\\\\n&amp; @V{=\\,}VV @VV{\\varphi}V @VV{\\, =}V \\\\\n0 @&gt;&gt;&gt; \\mathbf Z @&gt;&gt;{i}&gt; A @&gt;&gt;{p}&gt; \\mathbf Z_n @&gt;&gt;&gt; 0\\end{CD}</p>\n\n<blockquote>\n <p><strong>Exercise 5:</strong> Suppose $s = -1$. Then, $n$ is even, $r = 0$ and $$A \\cong \\mathbf Z \\rtimes_\\sigma \\mathbf Z_n,$$ where $\\sigma: \\mathbf Z_n \\rightarrow \\operatorname{Aut}(\\mathbf Z)$ is given by $\\sigma(1)(1) = -1$. The maps $i': \\mathbf Z \\rightarrow \\mathbf Z \\rtimes \\mathbf Z_n$ and $p': \\mathbf Z \\rtimes \\mathbf Z_n \\rightarrow \\mathbf Z_n$ are given by $$i'(1) = (1,0)\\\\ p'(1,0) = 0 \\\\ p'(0,1) = 1.$$</p>\n</blockquote>\n\n<p><em>Hint: in this case $ba^rb^{-1} = a^{-r}$ and $a^r = b^n$, so $r = 0$ and $b^n = 1$. 
Conclude that $A$ is isomorphic to $\\mathbf Z \\rtimes \\mathbf Z_n$ and classify all homomorphisms $\\mathbf Z_n \\rightarrow \\operatorname{Aut}(\\mathbf Z)$. Note that $\\mathbf Z$ has only two automorphisms and, by exercise 3, $n$ is even when $s = -1$.</em></p>\n\n<hr>\n\n<p>What's going on in cohomological terms?</p>\n\n<p>$\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z)$ classifies the extensions of $\\mathbf Z_n$ by $\\mathbf Z$ as $\\mathbf Z$-modules or, in other words, as abelian groups. To compute $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z)$, one would usually apply $\\operatorname{Hom}(\\cdot, \\mathbf Z)$ to the \"multiplication by $n$\" sequence $$0 \\rightarrow \\mathbf Z \\rightarrow \\mathbf Z \\rightarrow \\mathbf Z_n \\rightarrow 0$$ to get $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = \\mathbf Z/n\\mathbf Z$. Exercise 4 makes this equality explicit. The class of $r$ in $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = \\mathbf Z/n\\mathbf Z$ corresponds to the extension $\\mathbf Z \\times \\mathbf Z_t$ with maps $p'$ and $i'$ as in the statement of exercise 4.</p>\n\n<p>Non-abelian group extensions of $\\mathbf Z_n$ by $\\mathbf Z$ are not detected by $\\operatorname{Ext}(\\mathbf Z_n, \\mathbf Z) = H^2(\\mathbf Z_n, \\mathbf Z)$. To detect more general extensions, one could look, for example, at $H^3$ (see <a href=\"http://sierra.nmsu.edu/morandi/notes/GroupExtensions.pdf\" rel=\"nofollow noreferrer\">here</a>). IMO Exercise 5 is a more down-to-earth way of treating this case.</p>\n" }
Question: Complete the short exact sequence $0\to \mathbb Z\to A\to \mathbb Z_n \to 0$ Answer 1: <p>Since I don't know what we can assume, let's classify all those extensions by brute force. Since it's a problem in a chapter about homology, $A$ is probably meant to be abelian but let's go ahead and do the general case. I'll outline a proof in the form of exercises. At the end, I outline the relationship between the explicit constructions and the cohomological argument.</p> <p>First, name the arrows in the exact sequence: $i: \mathbf Z \rightarrow A$ and $p: A \rightarrow \mathbf Z_n$.</p> <p>Let $a = i(1)$ and let $b \in A$ such that $p(b) = 1$.</p> <blockquote> <p><strong>Exercise 1:</strong> $a$ and $b$ generate $A$.</p> </blockquote> <p>So, $$A = \langle a, b : \text{ some relations we need to find}\rangle$$</p> <blockquote> <p><strong>Exercise 2</strong>: $b^n = a^r$ and $bab^{-1} = a^s$ for some integers $r$ and $s$.</p> </blockquote> <p><em>Hint: What are the images of $bab^{-1}$ and $b^n$ under $p$?</em></p> <blockquote> <p><strong>Exercise 3</strong>: $a^{s^n} = a$ and thus $s = \pm 1$.</p> </blockquote> <p><em>Hint: use the relations we have so far. For the last part, recall that $a$ is the generator of a copy of $\mathbf Z$ in $A$.</em></p> <blockquote> <p><strong>Exercise 4:</strong> Suppose $s = 1$. Let $t = \gcd(n, r)$ and write $$1 = \alpha \frac n t + \beta \frac r t,$$ for suitable $\alpha, \beta \in \mathbf Z$. Then, $$A \cong \mathbf Z \times \mathbf Z_t,$$ with $i': \mathbf Z \rightarrow \mathbf Z \times \mathbf Z_t$ and $p': \mathbf Z \times \mathbf Z_t \rightarrow \mathbf Z_n$ given by $$i'(1) = (n/t,-\beta)\\ p'(1,0) = \beta \\ p'(0,1) = n/t.$$</p> </blockquote> <p><em>Hint (so it doesn't look like magic): $s = 1$ implies $A$ abelian and the relation $b^n = a^r$ can then be rewritten as $1 = (b^{n/t}a^{-r/t})^t$. In fact, $t$ is the order of $b^{n/t}a^{-r/t}$ in $A$. 
Show that $b^{n/t}a^{-r/t}$ and $b^{\beta}a^{\alpha}$ generate $A$ and consider the homomorphism $\varphi: \mathbf Z \times \mathbf Z_t \rightarrow A$ given by $\varphi(1,0) = b^{\beta}a^{\alpha}$ and $\varphi(0, 1)=b^{n/t}a^{-r/t}$. It sits in the commutative diagram</em> $\require{AMScd}$ \begin{CD} 0 @&gt;&gt;&gt; \mathbf Z @&gt;{i'}&gt;&gt; \mathbf Z \times \mathbf Z_t @&gt;{p'}&gt;&gt; \mathbf Z_n @&gt;&gt;&gt; 0\\ &amp; @V{=\,}VV @VV{\varphi}V @VV{\, =}V \\ 0 @&gt;&gt;&gt; \mathbf Z @&gt;&gt;{i}&gt; A @&gt;&gt;{p}&gt; \mathbf Z_n @&gt;&gt;&gt; 0\end{CD}</p> <blockquote> <p><strong>Exercise 5:</strong> Suppose $s = -1$. Then, $n$ is even, $r = 0$ and $$A \cong \mathbf Z \rtimes_\sigma \mathbf Z_n,$$ where $\sigma: \mathbf Z_n \rightarrow \operatorname{Aut}(\mathbf Z)$ is given by $\sigma(1)(1) = -1$. The maps $i': \mathbf Z \rightarrow \mathbf Z \rtimes \mathbf Z_n$ and $p': \mathbf Z \rtimes \mathbf Z_n \rightarrow \mathbf Z_n$ are given by $$i'(1) = (1,0)\\ p'(1,0) = 0 \\ p'(0,1) = 1.$$</p> </blockquote> <p><em>Hint: in this case $ba^rb^{-1} = a^{-r}$ and $a^r = b^n$, so $r = 0$ and $b^n = 1$. Conclude that $A$ is isomorphic to $\mathbf Z \rtimes \mathbf Z_n$ and classify all homomorphisms $\mathbf Z_n \rightarrow \operatorname{Aut}(\mathbf Z)$. Note that $\mathbf Z$ has only two automorphisms and, by exercise 3, $n$ is even when $s = -1$.</em></p> <hr> <p>What's going on in cohomological terms?</p> <p>$\operatorname{Ext}(\mathbf Z_n, \mathbf Z)$ classifies the extensions of $\mathbf Z_n$ by $\mathbf Z$ as $\mathbf Z$-modules or, in other words, as abelian groups. To compute $\operatorname{Ext}(\mathbf Z_n, \mathbf Z)$, one would usually apply $\operatorname{Hom}(\cdot, \mathbf Z)$ to the "multiplication by $n$" sequence $$0 \rightarrow \mathbf Z \rightarrow \mathbf Z \rightarrow \mathbf Z_n \rightarrow 0$$ to get $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = \mathbf Z/n\mathbf Z$. Exercise 4 makes this equality explicit. 
The class of $r$ in $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = \mathbf Z/n\mathbf Z$ corresponds to the extension $\mathbf Z \times \mathbf Z_t$ with maps $p'$ and $i'$ as in the statement of exercise 4.</p> <p>Non-abelian group extensions of $\mathbf Z_n$ by $\mathbf Z$ are not detected by $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = H^2(\mathbf Z_n, \mathbf Z)$. To detect more general extensions, one could look, for example, at $H^3$ (see <a href="http://sierra.nmsu.edu/morandi/notes/GroupExtensions.pdf" rel="nofollow noreferrer">here</a>). IMO Exercise 5 is a more down-to-earth way of treating this case.</p> Answer 2: <p>This is a nice exercise for a beginning section of homology theory, since doing this without Ext will make you really appreciate the machinery homological algebra has to offer.</p> <p>I do not think that I will go through all the details, but let us start:</p> <p>First of all, we have the case $\mathbb Z_d = 0$ (one could refer to this as $d=1$), i.e. a short exact sequence</p> <p>$$0 \to \mathbb Z \to \mathbb Z \to \mathbb Z_n \to 0.$$</p> <p>Of course the first map is given by $\cdot m$ for some $m \in \mathbb Z$ and by looking at the cokernel, we get that $m = \pm n$. The second map is given by $1 \mapsto e$, where $e$ is coprime to $n$.</p> <p>The much more involved case is the case $d \geq 2$, i.e. where the summand $\mathbb Z_d$ actually occurs.</p> <p>Call the first map $g: \mathbb Z \to \mathbb Z \oplus \mathbb Z_d$ and the second map $f: \mathbb Z \oplus \mathbb Z_d \to \mathbb Z_n$. Denote the kernel of $f$ by $K$. By exactness we get $K \cong \mathbb Z$.</p> <p>Note that $f(0,n)= n \cdot f(0,1) = 0$, so $(0,n) \in K$. $K$ is torsion-free, but $(0,n) \in \mathbb Z \oplus \mathbb Z_d$ is of course torsion (annihilated by $d$), hence we have $(0,n)=(0,0)$, i.e. 
$d|n$.</p> <blockquote> <p>So we have all possibilites for $A$, namely $A \in \{\mathbb Z \} \cup \{\mathbb Z \oplus \mathbb Z_d ~ | ~ d \geq 2,d | n\}$.</p> </blockquote> <p>We still have to figure out the maps in the second case:</p> <p>Of course $g$ is given by $g(1) = (m,e)$ for some $m,e \in \mathbb Z, 0 \leq e &lt;d$. Let us call the cokernel $C_{m,e}$.</p> <p>$C_{m_e}$ is generated by $e_1=(1,0),e_2=(0,1)$ and we have relations $me_1+ee_2=0$ and $de_2=0$, thus $C_{m_e}$ is presented by $$\begin{pmatrix}m&amp;0\\e&amp;d\end{pmatrix}.$$ Upto multiplication in $SL_2(\mathbb Z)$, this should be equal to $$\begin{pmatrix}n&amp;0\\0&amp;1\end{pmatrix}$$, i.e. by looking at fitting ideals, we get that</p> <p>$$md=\pm n ~~ \text{ and } ~~ (m,e,d)=(1).$$</p> <p>Thus we have found all possibilities for $g$. To find $f$, note that we have the following commutative diagram of exact sequences:</p> <p>$$\require{AMScd} \begin{CD} 0 @&gt;&gt;&gt; \mathbb Z^2 @&gt;\begin{pmatrix}m&amp;0\\e&amp;d\end{pmatrix}&gt;&gt; \mathbb Z^2 @&gt;&gt;&gt; C_{m,e} @&gt;&gt;&gt;0\\ @VVV @VXVV @VVYV @VVV @VVV \\ 0 @&gt;&gt;&gt; \mathbb Z^2 @&gt;&gt;\begin{pmatrix}n&amp;0\\0&amp;1\end{pmatrix}&gt; \mathbb Z^2 @&gt;p&gt;&gt; \mathbb Z_n @&gt;&gt;&gt;0 \end{CD}$$ , where $X$ and $Y$ are obtained via the <a href="https://en.wikipedia.org/wiki/Smith_normal_form#Algorithm" rel="nofollow noreferrer">algorithm to determine the Smith normal form</a>.</p> <p>The map $p \circ Y$ factors through $\mathbb Z \oplus \mathbb Z_d$ and the factorization is $f$.</p> Answer 3: <p>Since I don't know what we can assume, let's classify all those extensions by brute force. Since it's a problem in a chapter about homology, $A$ is probably meant to be abelian but let's go ahead and do the general case. I'll outline a proof in the form of exercises. 
At the end, I outline the relationship between the explicit constructions and the cohomological argument.</p> <p>First, name the arrows in the exact sequence: $i: \mathbf Z \rightarrow A$ and $p: A \rightarrow \mathbf Z_n$.</p> <p>Let $a = i(1)$ and let $b \in A$ such that $p(b) = 1$.</p> <blockquote> <p><strong>Exercise 1:</strong> $a$ and $b$ generate $A$</p> </blockquote> <p>So, $$A = \langle a, b : \text{ some relations we need to find}\rangle$$</p> <blockquote> <p><strong>Exercise 2</strong>: $b^n = a^r$ and $bab^{-1} = a^s$ for some integers $r$ and $s$.</p> </blockquote> <p><em>Hint: What are the images of $bab^{-1}$ and $b^n$ under $p$?.</em></p> <blockquote> <p><strong>Exercise 3</strong>: $a^{s^n} = a$ and thus $s = \pm 1$.</p> </blockquote> <p><em>Hint: use the relations we have so far. For the last part, recall that $a$ is the generator of a copy of $\mathbf Z$ in $A$.</em></p> <blockquote> <p><strong>Exercise 4:</strong> Suppose $s = 1$. Let $t = \gcd(n, r)$ and write $$1 = \alpha \frac n t + \beta \frac r t,$$ for suitable $\alpha, \beta \in \mathbf Z$. Then, $$A \cong \mathbf Z \times \mathbf Z_t,$$ with $i': \mathbf Z \rightarrow \mathbf Z \times \mathbf Z_t$ and $p': \mathbf Z \times \mathbf Z_t \rightarrow \mathbf Z_n$ given by $$i'(1) = (n/t,-\beta)\\ p'(1,0) = \beta \\ p'(0,1) = n/t.$$</p> </blockquote> <p><em>Hint (so it doesn't look like magic): $s = 1$ implies $A$ abelian and the relation $b^n = a^r$ can then be rewritten as $1 = (b^{n/t}a^{-r/t})^t$. In fact, $t$ is the order of $b^{n/t}a^{-r/t}$ in $A$. Show that $b^{n/t}a^{-r/t}$ and $b^{\beta}a^{\alpha}$ generate $A$ and consider the homomorphism $\varphi: \mathbf Z \times \mathbf Z_t \rightarrow A$ given by $\varphi(1,0) = b^{\beta}a^{\alpha}$ and $\varphi(0, 1)=b^{n/t}a^{-r/t}$. 
It sits in the commutative diagram</em> $\require{AMScd}$ \begin{CD} 0 @&gt;&gt;&gt; \mathbf Z @&gt;{i'}&gt;&gt; \mathbf Z \times \mathbf Z_t @&gt;{p'}&gt;&gt; \mathbf Z_n @&gt;&gt;&gt; 0\\ &amp; @V{=\,}VV @VV{\varphi}V @VV{\, =}V \\ 0 @&gt;&gt;&gt; \mathbf Z @&gt;&gt;{i}&gt; A @&gt;&gt;{p}&gt; \mathbf Z_n @&gt;&gt;&gt; 0\end{CD}</p> <blockquote> <p><strong>Exercise 5:</strong> Suppose $s = -1$. Then, $n$ is even, $r = 0$ and $$A \cong \mathbf Z \rtimes_\sigma \mathbf Z_n,$$ where $\sigma: \mathbf Z_n \rightarrow \operatorname{Aut}(\mathbf Z)$ is given by $\sigma(1)(1) = -1$. The maps $i': \mathbf Z \rightarrow \mathbf Z \rtimes \mathbf Z_n$ and $p': \mathbf Z \rtimes \mathbf Z_n \rightarrow \mathbf Z_n$ are given by $$i'(1) = (1,0)\\ p'(1,0) = 0 \\ p'(0,1) = 1.$$</p> </blockquote> <p><em>Hint: in this case $ba^rb^{-1} = a^{-r}$ and $a^r = b^n$, so $r = 0$ and $b^n = 1$. Conclude that $A$ is isomorphic to $\mathbf Z \rtimes \mathbf Z_n$ and classify all homomorphisms $\mathbf Z_n \rightarrow \operatorname{Aut}(\mathbf Z)$. Note that $\mathbf Z$ has only two automorphisms and, by exercise 3, $n$ is even when $s = -1$.</em></p> <hr> <p>What's going on in cohomological terms?</p> <p>$\operatorname{Ext}(\mathbf Z_n, \mathbf Z)$ classifies the extensions of $\mathbf Z_n$ by $\mathbf Z$ as $\mathbf Z$-modules or, in other words, as abelian groups. To compute $\operatorname{Ext}(\mathbf Z_n, \mathbf Z)$, one would usually apply $\operatorname{Hom}(\cdot, \mathbf Z)$ to the "multiplication by $n$" sequence $$0 \rightarrow \mathbf Z \rightarrow \mathbf Z \rightarrow \mathbf Z_n \rightarrow 0$$ to get $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = \mathbf Z/n\mathbf Z$. Exercise 4 makes this equality explicit. 
The class of $r$ in $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = \mathbf Z/n\mathbf Z$ corresponds to the extension $\mathbf Z \times \mathbf Z_t$ with maps $p'$ and $i'$ as in the statement of exercise 4.</p> <p>Non-abelian group extensions of $\mathbf Z_n$ by $\mathbf Z$ are not detected by $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = H^2(\mathbf Z_n, \mathbf Z)$. To detect more general extensions, one could look, for example, at $H^3$ (see <a href="http://sierra.nmsu.edu/morandi/notes/GroupExtensions.pdf" rel="nofollow noreferrer">here</a>). IMO Exercise 5 is a more down to earth way of treating this case.</p> Answer 4: <p>This is a nice exercise for a beginning section of homology theory, since doing this without Ext will make you really appreciate the machinery homological algebra has to offer.</p> <p>I do not think, that I will go through all details, but let us start:</p> <p>First of all, we have the case $\mathbb Z_d = 0$ (one could refer to this as $d=1$), i.e. a short exact sequence</p> <p>$$0 \to \mathbb Z \to \mathbb Z \to \mathbb Z_n \to 0.$$</p> <p>Of course the first map is given by $\cdot m$ for some $m \in \mathbb Z$ and by looking at the cokernel, we get that $m = \pm n$. The second map is given by $1 \mapsto e$, where $e$ is co-prime to $n$.</p> <p>The much more involved case is the case $d \geq 2$, i.e. where the summand $\mathbb Z_d$ actually occurs.</p> <p>Call the first map $g: \mathbb Z \to \mathbb Z \oplus \mathbb Z_d$ and the second map $f: \mathbb Z \oplus \mathbb Z_d \to \mathbb Z_n$. Denote the kernel of $f$ by $K$. By exactness we get $K \cong \mathbb Z$.</p> <p>Note that $f(0,n)=0$. $K$ is torsion free, but $(0,n) \in \mathbb Z \oplus \mathbb Z_d$ is of course torsion (annihilated by $d$), hence we have $(0,n)=(0,0)$, i.e. 
$d|n$.</p> <blockquote> <p>So we have all possibilites for $A$, namely $A \in \{\mathbb Z \} \cup \{\mathbb Z \oplus \mathbb Z_d ~ | ~ d \geq 2,d | n\}$.</p> </blockquote> <p>We still have to figure out the maps in the second case:</p> <p>Of course $g$ is given by $g(1) = (m,e)$ for some $m,e \in \mathbb Z, 0 \leq e &lt;d$. Let us call the cokernel $C_{m,e}$.</p> <p>$C_{m_e}$ is generated by $e_1=(1,0),e_2=(0,1)$ and we have relations $me_1+ee_2=0$ and $de_2=0$, thus $C_{m_e}$ is presented by $$\begin{pmatrix}m&amp;0\\e&amp;d\end{pmatrix}.$$ Upto multiplication in $SL_2(\mathbb Z)$, this should be equal to $$\begin{pmatrix}n&amp;0\\0&amp;1\end{pmatrix}$$, i.e. by looking at fitting ideals, we get that</p> <p>$$md=\pm n ~~ \text{ and } ~~ (m,e,d)=(1).$$</p> <p>Thus we have found all possibilities for $g$. To find $f$, note that we have the following commutative diagram of exact sequences:</p> <p>$$\require{AMScd} \begin{CD} 0 @&gt;&gt;&gt; \mathbb Z^2 @&gt;\begin{pmatrix}m&amp;0\\e&amp;d\end{pmatrix}&gt;&gt; \mathbb Z^2 @&gt;&gt;&gt; C_{m,e} @&gt;&gt;&gt;0\\ @VVV @VXVV @VVYV @VVV @VVV \\ 0 @&gt;&gt;&gt; \mathbb Z^2 @&gt;&gt;\begin{pmatrix}n&amp;0\\0&amp;1\end{pmatrix}&gt; \mathbb Z^2 @&gt;p&gt;&gt; \mathbb Z_n @&gt;&gt;&gt;0 \end{CD}$$ , where $X$ and $Y$ are obtained via the <a href="https://en.wikipedia.org/wiki/Smith_normal_form#Algorithm" rel="nofollow noreferrer">algorithm to determine the Smith normal form</a>.</p> <p>The map $p \circ Y$ factors through $\mathbb Z \oplus \mathbb Z_d$ and the factorization is $f$.</p> Answer 5: <p>Since I don't know what we can assume, let's classify all those extensions by brute force. Since it's a problem in a chapter about homology, $A$ is probably meant to be abelian but let's go ahead and do the general case. I'll outline a proof in the form of exercises. 
At the end, I outline the relationship between the explicit constructions and the cohomological argument.</p> <p>First, name the arrows in the exact sequence: $i: \mathbf Z \rightarrow A$ and $p: A \rightarrow \mathbf Z_n$.</p> <p>Let $a = i(1)$ and let $b \in A$ such that $p(b) = 1$.</p> <blockquote> <p><strong>Exercise 1:</strong> $a$ and $b$ generate $A$</p> </blockquote> <p>So, $$A = \langle a, b : \text{ some relations we need to find}\rangle$$</p> <blockquote> <p><strong>Exercise 2</strong>: $b^n = a^r$ and $bab^{-1} = a^s$ for some integers $r$ and $s$.</p> </blockquote> <p><em>Hint: What are the images of $bab^{-1}$ and $b^n$ under $p$?.</em></p> <blockquote> <p><strong>Exercise 3</strong>: $a^{s^n} = a$ and thus $s = \pm 1$.</p> </blockquote> <p><em>Hint: use the relations we have so far. For the last part, recall that $a$ is the generator of a copy of $\mathbf Z$ in $A$.</em></p> <blockquote> <p><strong>Exercise 4:</strong> Suppose $s = 1$. Let $t = \gcd(n, r)$ and write $$1 = \alpha \frac n t + \beta \frac r t,$$ for suitable $\alpha, \beta \in \mathbf Z$. Then, $$A \cong \mathbf Z \times \mathbf Z_t,$$ with $i': \mathbf Z \rightarrow \mathbf Z \times \mathbf Z_t$ and $p': \mathbf Z \times \mathbf Z_t \rightarrow \mathbf Z_n$ given by $$i'(1) = (n/t,-\beta)\\ p'(1,0) = \beta \\ p'(0,1) = n/t.$$</p> </blockquote> <p><em>Hint (so it doesn't look like magic): $s = 1$ implies $A$ abelian and the relation $b^n = a^r$ can then be rewritten as $1 = (b^{n/t}a^{-r/t})^t$. In fact, $t$ is the order of $b^{n/t}a^{-r/t}$ in $A$. Show that $b^{n/t}a^{-r/t}$ and $b^{\beta}a^{\alpha}$ generate $A$ and consider the homomorphism $\varphi: \mathbf Z \times \mathbf Z_t \rightarrow A$ given by $\varphi(1,0) = b^{\beta}a^{\alpha}$ and $\varphi(0, 1)=b^{n/t}a^{-r/t}$. 
It sits in the commutative diagram</em> $\require{AMScd}$ \begin{CD} 0 @&gt;&gt;&gt; \mathbf Z @&gt;{i'}&gt;&gt; \mathbf Z \times \mathbf Z_t @&gt;{p'}&gt;&gt; \mathbf Z_n @&gt;&gt;&gt; 0\\ &amp; @V{=\,}VV @VV{\varphi}V @VV{\, =}V \\ 0 @&gt;&gt;&gt; \mathbf Z @&gt;&gt;{i}&gt; A @&gt;&gt;{p}&gt; \mathbf Z_n @&gt;&gt;&gt; 0\end{CD}</p> <blockquote> <p><strong>Exercise 5:</strong> Suppose $s = -1$. Then, $n$ is even, $r = 0$ and $$A \cong \mathbf Z \rtimes_\sigma \mathbf Z_n,$$ where $\sigma: \mathbf Z_n \rightarrow \operatorname{Aut}(\mathbf Z)$ is given by $\sigma(1)(1) = -1$. The maps $i': \mathbf Z \rightarrow \mathbf Z \rtimes \mathbf Z_n$ and $p': \mathbf Z \rtimes \mathbf Z_n \rightarrow \mathbf Z_n$ are given by $$i'(1) = (1,0)\\ p'(1,0) = 0 \\ p'(0,1) = 1.$$</p> </blockquote> <p><em>Hint: in this case $ba^rb^{-1} = a^{-r}$ and $a^r = b^n$, so $r = 0$ and $b^n = 1$. Conclude that $A$ is isomorphic to $\mathbf Z \rtimes \mathbf Z_n$ and classify all homomorphisms $\mathbf Z_n \rightarrow \operatorname{Aut}(\mathbf Z)$. Note that $\mathbf Z$ has only two automorphisms and, by exercise 3, $n$ is even when $s = -1$.</em></p> <hr> <p>What's going on in cohomological terms?</p> <p>$\operatorname{Ext}(\mathbf Z_n, \mathbf Z)$ classifies the extensions of $\mathbf Z_n$ by $\mathbf Z$ as $\mathbf Z$-modules or, in other words, as abelian groups. To compute $\operatorname{Ext}(\mathbf Z_n, \mathbf Z)$, one would usually apply $\operatorname{Hom}(\cdot, \mathbf Z)$ to the "multiplication by $n$" sequence $$0 \rightarrow \mathbf Z \rightarrow \mathbf Z \rightarrow \mathbf Z_n \rightarrow 0$$ to get $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = \mathbf Z/n\mathbf Z$. Exercise 4 makes this equality explicit. 
The class of $r$ in $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = \mathbf Z/n\mathbf Z$ corresponds to the extension $\mathbf Z \times \mathbf Z_t$ with maps $p'$ and $i'$ as in the statement of Exercise 4.</p> <p>Non-abelian group extensions of $\mathbf Z_n$ by $\mathbf Z$ are not detected by $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = H^2(\mathbf Z_n, \mathbf Z)$. To detect more general extensions, one could look, for example, at $H^3$ (see <a href="http://sierra.nmsu.edu/morandi/notes/GroupExtensions.pdf" rel="nofollow noreferrer">here</a>). IMO Exercise 5 is a more down-to-earth way of treating this case.</p> Answer 6: <p>This is a nice exercise for a beginning section of homology theory, since doing this without Ext will make you really appreciate the machinery homological algebra has to offer.</p> <p>I do not think that I will go through all the details, but let us start:</p> <p>First of all, we have the case $\mathbb Z_d = 0$ (one could refer to this as $d=1$), i.e. a short exact sequence</p> <p>$$0 \to \mathbb Z \to \mathbb Z \to \mathbb Z_n \to 0.$$</p> <p>Of course the first map is given by $\cdot m$ for some $m \in \mathbb Z$, and by looking at the cokernel we get that $m = \pm n$. The second map is given by $1 \mapsto e$, where $e$ is co-prime to $n$.</p> <p>The much more involved case is $d \geq 2$, i.e. where the summand $\mathbb Z_d$ actually occurs.</p> <p>Call the first map $g: \mathbb Z \to \mathbb Z \oplus \mathbb Z_d$ and the second map $f: \mathbb Z \oplus \mathbb Z_d \to \mathbb Z_n$. Denote the kernel of $f$ by $K$. By exactness we get $K \cong \mathbb Z$.</p> <p>Note that $f(0,n) = n \cdot f(0,1) = 0$, so $(0,n) \in K$. $K$ is torsion-free, but $(0,n) \in \mathbb Z \oplus \mathbb Z_d$ is of course torsion (annihilated by $d$), hence we have $(0,n)=(0,0)$, i.e. 
$d|n$.</p> <blockquote> <p>So we have all possibilities for $A$, namely $A \in \{\mathbb Z \} \cup \{\mathbb Z \oplus \mathbb Z_d ~ | ~ d \geq 2,d | n\}$.</p> </blockquote> <p>We still have to figure out the maps in the second case:</p> <p>Of course $g$ is given by $g(1) = (m,e)$ for some $m,e \in \mathbb Z, 0 \leq e &lt; d$. Let us call the cokernel $C_{m,e}$.</p> <p>$C_{m,e}$ is generated by $e_1=(1,0),e_2=(0,1)$ and we have relations $me_1+ee_2=0$ and $de_2=0$, thus $C_{m,e}$ is presented by $$\begin{pmatrix}m&amp;0\\e&amp;d\end{pmatrix}.$$ Up to left and right multiplication by matrices in $GL_2(\mathbb Z)$, this should be equal to $$\begin{pmatrix}n&amp;0\\0&amp;1\end{pmatrix},$$ i.e. by looking at Fitting ideals, we get that</p> <p>$$md=\pm n ~~ \text{ and } ~~ (m,e,d)=(1).$$</p> <p>Thus we have found all possibilities for $g$. To find $f$, note that we have the following commutative diagram of exact sequences:</p> <p>$$\require{AMScd} \begin{CD} 0 @&gt;&gt;&gt; \mathbb Z^2 @&gt;\begin{pmatrix}m&amp;0\\e&amp;d\end{pmatrix}&gt;&gt; \mathbb Z^2 @&gt;&gt;&gt; C_{m,e} @&gt;&gt;&gt;0\\ @VVV @VXVV @VVYV @VVV @VVV \\ 0 @&gt;&gt;&gt; \mathbb Z^2 @&gt;&gt;\begin{pmatrix}n&amp;0\\0&amp;1\end{pmatrix}&gt; \mathbb Z^2 @&gt;p&gt;&gt; \mathbb Z_n @&gt;&gt;&gt;0 \end{CD}$$ where $X$ and $Y$ are obtained via the <a href="https://en.wikipedia.org/wiki/Smith_normal_form#Algorithm" rel="nofollow noreferrer">algorithm to determine the Smith normal form</a>.</p> <p>The map $p \circ Y$ factors through $\mathbb Z \oplus \mathbb Z_d$ and the factorization is $f$.</p>
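<p>As a sanity check on the explicit maps in Exercise 4, here is a small Python sketch (the function names are ad hoc, not from either answer) that builds $t = \gcd(n,r)$, $i'$, and $p'$ from given $n$ and $r$, and verifies exactness of $0 \to \mathbf Z \to \mathbf Z \times \mathbf Z_t \to \mathbf Z_n \to 0$ on a finite window of the $\mathbf Z$ factor. Kernel elements of $p'$ have first coordinate a multiple of $n/t$, so comparing kernel and image on a symmetric window is an honest test.</p>

```python
from math import gcd

def bezout(a, b):
    """Extended Euclid: return (x, y) with a*x + b*y == gcd(a, b)."""
    if b == 0:
        return 1, 0
    x, y = bezout(b, a % b)
    return y, x - (a // b) * y

def extension_maps(n, r):
    """For the abelian case s = 1: t = gcd(n, r) and the maps
    i': Z -> Z x Z_t, p': Z x Z_t -> Z_n from Exercise 4."""
    t = gcd(n, r)
    # alpha*(n/t) + beta*(r/t) == 1, possible since gcd(n/t, r/t) == 1
    alpha, beta = bezout(n // t, r // t)
    i = lambda k: (k * (n // t), (-k * beta) % t)   # i'(k) = k*(n/t, -beta)
    p = lambda x, y: (x * beta + y * (n // t)) % n  # p'(x, y) in Z_n
    return t, i, p

def is_exact(n, r, window=60):
    """Check exactness of 0 -> Z -> Z x Z_t -> Z_n -> 0 on a finite window
    of the Z coordinate, for n >= 1 and 0 <= r."""
    t, i, p = extension_maps(n, r)
    # p' o i' = 0, i.e. im i' is contained in ker p'
    if any(p(*i(k)) != 0 for k in range(-window, window + 1)):
        return False
    # p' is surjective onto Z_n
    if {p(x, y) for x in range(n) for y in range(t)} != set(range(n)):
        return False
    # ker p' = im i'; kernel elements have x divisible by n/t, so both
    # sets are compared over exactly the same bounded region
    ker = {(x, y) for x in range(-window, window + 1) for y in range(t)
           if p(x, y) == 0}
    img = {i(k) for k in range(-window, window + 1)
           if abs(k * (n // t)) <= window}
    return ker == img
```

For instance, <code>is_exact(12, 8)</code> exercises the case $t = 4$ (so $A \cong \mathbf Z \times \mathbf Z_4$), <code>is_exact(n, 0)</code> the trivial extension $\mathbf Z \times \mathbf Z_n$, and <code>is_exact(10, 3)</code> the case $t = 1$, where $A \cong \mathbf Z$.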
The class of $r$ in $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = \mathbf Z/n\mathbf Z$ corresponds to the extension $\mathbf Z \times \mathbf Z_t$ with maps $p'$ and $i'$ as in the statement of exercise 4.</p> <p>Non-abelian group extensions of $\mathbf Z_n$ by $\mathbf Z$ are not detected by $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = H^2(\mathbf Z_n, \mathbf Z)$. To detect more general extensions, one could look, for example, at $H^3$ (see <a href="http://sierra.nmsu.edu/morandi/notes/GroupExtensions.pdf" rel="nofollow noreferrer">here</a>). IMO Exercise 5 is a more down to earth way of treating this case.</p> Answer 16: <p>This is a nice exercise for a beginning section of homology theory, since doing this without Ext will make you really appreciate the machinery homological algebra has to offer.</p> <p>I do not think, that I will go through all details, but let us start:</p> <p>First of all, we have the case $\mathbb Z_d = 0$ (one could refer to this as $d=1$), i.e. a short exact sequence</p> <p>$$0 \to \mathbb Z \to \mathbb Z \to \mathbb Z_n \to 0.$$</p> <p>Of course the first map is given by $\cdot m$ for some $m \in \mathbb Z$ and by looking at the cokernel, we get that $m = \pm n$. The second map is given by $1 \mapsto e$, where $e$ is co-prime to $n$.</p> <p>The much more involved case is the case $d \geq 2$, i.e. where the summand $\mathbb Z_d$ actually occurs.</p> <p>Call the first map $g: \mathbb Z \to \mathbb Z \oplus \mathbb Z_d$ and the second map $f: \mathbb Z \oplus \mathbb Z_d \to \mathbb Z_n$. Denote the kernel of $f$ by $K$. By exactness we get $K \cong \mathbb Z$.</p> <p>Note that $f(0,n)=0$. $K$ is torsion free, but $(0,n) \in \mathbb Z \oplus \mathbb Z_d$ is of course torsion (annihilated by $d$), hence we have $(0,n)=(0,0)$, i.e. 
$d|n$.</p> <blockquote> <p>So we have all possibilites for $A$, namely $A \in \{\mathbb Z \} \cup \{\mathbb Z \oplus \mathbb Z_d ~ | ~ d \geq 2,d | n\}$.</p> </blockquote> <p>We still have to figure out the maps in the second case:</p> <p>Of course $g$ is given by $g(1) = (m,e)$ for some $m,e \in \mathbb Z, 0 \leq e &lt;d$. Let us call the cokernel $C_{m,e}$.</p> <p>$C_{m_e}$ is generated by $e_1=(1,0),e_2=(0,1)$ and we have relations $me_1+ee_2=0$ and $de_2=0$, thus $C_{m_e}$ is presented by $$\begin{pmatrix}m&amp;0\\e&amp;d\end{pmatrix}.$$ Upto multiplication in $SL_2(\mathbb Z)$, this should be equal to $$\begin{pmatrix}n&amp;0\\0&amp;1\end{pmatrix}$$, i.e. by looking at fitting ideals, we get that</p> <p>$$md=\pm n ~~ \text{ and } ~~ (m,e,d)=(1).$$</p> <p>Thus we have found all possibilities for $g$. To find $f$, note that we have the following commutative diagram of exact sequences:</p> <p>$$\require{AMScd} \begin{CD} 0 @&gt;&gt;&gt; \mathbb Z^2 @&gt;\begin{pmatrix}m&amp;0\\e&amp;d\end{pmatrix}&gt;&gt; \mathbb Z^2 @&gt;&gt;&gt; C_{m,e} @&gt;&gt;&gt;0\\ @VVV @VXVV @VVYV @VVV @VVV \\ 0 @&gt;&gt;&gt; \mathbb Z^2 @&gt;&gt;\begin{pmatrix}n&amp;0\\0&amp;1\end{pmatrix}&gt; \mathbb Z^2 @&gt;p&gt;&gt; \mathbb Z_n @&gt;&gt;&gt;0 \end{CD}$$ , where $X$ and $Y$ are obtained via the <a href="https://en.wikipedia.org/wiki/Smith_normal_form#Algorithm" rel="nofollow noreferrer">algorithm to determine the Smith normal form</a>.</p> <p>The map $p \circ Y$ factors through $\mathbb Z \oplus \mathbb Z_d$ and the factorization is $f$.</p> Answer 17: <p>Since I don't know what we can assume, let's classify all those extensions by brute force. Since it's a problem in a chapter about homology, $A$ is probably meant to be abelian but let's go ahead and do the general case. I'll outline a proof in the form of exercises. 
At the end, I outline the relationship between the explicit constructions and the cohomological argument.</p> <p>First, name the arrows in the exact sequence: $i: \mathbf Z \rightarrow A$ and $p: A \rightarrow \mathbf Z_n$.</p> <p>Let $a = i(1)$ and let $b \in A$ such that $p(b) = 1$.</p> <blockquote> <p><strong>Exercise 1:</strong> $a$ and $b$ generate $A$</p> </blockquote> <p>So, $$A = \langle a, b : \text{ some relations we need to find}\rangle$$</p> <blockquote> <p><strong>Exercise 2</strong>: $b^n = a^r$ and $bab^{-1} = a^s$ for some integers $r$ and $s$.</p> </blockquote> <p><em>Hint: What are the images of $bab^{-1}$ and $b^n$ under $p$?.</em></p> <blockquote> <p><strong>Exercise 3</strong>: $a^{s^n} = a$ and thus $s = \pm 1$.</p> </blockquote> <p><em>Hint: use the relations we have so far. For the last part, recall that $a$ is the generator of a copy of $\mathbf Z$ in $A$.</em></p> <blockquote> <p><strong>Exercise 4:</strong> Suppose $s = 1$. Let $t = \gcd(n, r)$ and write $$1 = \alpha \frac n t + \beta \frac r t,$$ for suitable $\alpha, \beta \in \mathbf Z$. Then, $$A \cong \mathbf Z \times \mathbf Z_t,$$ with $i': \mathbf Z \rightarrow \mathbf Z \times \mathbf Z_t$ and $p': \mathbf Z \times \mathbf Z_t \rightarrow \mathbf Z_n$ given by $$i'(1) = (n/t,-\beta)\\ p'(1,0) = \beta \\ p'(0,1) = n/t.$$</p> </blockquote> <p><em>Hint (so it doesn't look like magic): $s = 1$ implies $A$ abelian and the relation $b^n = a^r$ can then be rewritten as $1 = (b^{n/t}a^{-r/t})^t$. In fact, $t$ is the order of $b^{n/t}a^{-r/t}$ in $A$. Show that $b^{n/t}a^{-r/t}$ and $b^{\beta}a^{\alpha}$ generate $A$ and consider the homomorphism $\varphi: \mathbf Z \times \mathbf Z_t \rightarrow A$ given by $\varphi(1,0) = b^{\beta}a^{\alpha}$ and $\varphi(0, 1)=b^{n/t}a^{-r/t}$. 
It sits in the commutative diagram</em> $\require{AMScd}$ \begin{CD} 0 @&gt;&gt;&gt; \mathbf Z @&gt;{i'}&gt;&gt; \mathbf Z \times \mathbf Z_t @&gt;{p'}&gt;&gt; \mathbf Z_n @&gt;&gt;&gt; 0\\ &amp; @V{=\,}VV @VV{\varphi}V @VV{\, =}V \\ 0 @&gt;&gt;&gt; \mathbf Z @&gt;&gt;{i}&gt; A @&gt;&gt;{p}&gt; \mathbf Z_n @&gt;&gt;&gt; 0\end{CD}</p> <blockquote> <p><strong>Exercise 5:</strong> Suppose $s = -1$. Then, $n$ is even, $r = 0$ and $$A \cong \mathbf Z \rtimes_\sigma \mathbf Z_n,$$ where $\sigma: \mathbf Z_n \rightarrow \operatorname{Aut}(\mathbf Z)$ is given by $\sigma(1)(1) = -1$. The maps $i': \mathbf Z \rightarrow \mathbf Z \rtimes \mathbf Z_n$ and $p': \mathbf Z \rtimes \mathbf Z_n \rightarrow \mathbf Z_n$ are given by $$i'(1) = (1,0)\\ p'(1,0) = 0 \\ p'(0,1) = 1.$$</p> </blockquote> <p><em>Hint: in this case $ba^rb^{-1} = a^{-r}$ and $a^r = b^n$, so $r = 0$ and $b^n = 1$. Conclude that $A$ is isomorphic to $\mathbf Z \rtimes \mathbf Z_n$ and classify all homomorphisms $\mathbf Z_n \rightarrow \operatorname{Aut}(\mathbf Z)$. Note that $\mathbf Z$ has only two automorphisms and, by exercise 3, $n$ is even when $s = -1$.</em></p> <hr> <p>What's going on in cohomological terms?</p> <p>$\operatorname{Ext}(\mathbf Z_n, \mathbf Z)$ classifies the extensions of $\mathbf Z_n$ by $\mathbf Z$ as $\mathbf Z$-modules or, in other words, as abelian groups. To compute $\operatorname{Ext}(\mathbf Z_n, \mathbf Z)$, one would usually apply $\operatorname{Hom}(\cdot, \mathbf Z)$ to the "multiplication by $n$" sequence $$0 \rightarrow \mathbf Z \rightarrow \mathbf Z \rightarrow \mathbf Z_n \rightarrow 0$$ to get $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = \mathbf Z/n\mathbf Z$. Exercise 4 makes this equality explicit. 
The class of $r$ in $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = \mathbf Z/n\mathbf Z$ corresponds to the extension $\mathbf Z \times \mathbf Z_t$ with maps $p'$ and $i'$ as in the statement of exercise 4.</p> <p>Non-abelian group extensions of $\mathbf Z_n$ by $\mathbf Z$ are not detected by $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = H^2(\mathbf Z_n, \mathbf Z)$. To detect more general extensions, one could look, for example, at $H^3$ (see <a href="http://sierra.nmsu.edu/morandi/notes/GroupExtensions.pdf" rel="nofollow noreferrer">here</a>). IMO Exercise 5 is a more down to earth way of treating this case.</p> Answer 18: <p>This is a nice exercise for a beginning section of homology theory, since doing this without Ext will make you really appreciate the machinery homological algebra has to offer.</p> <p>I do not think, that I will go through all details, but let us start:</p> <p>First of all, we have the case $\mathbb Z_d = 0$ (one could refer to this as $d=1$), i.e. a short exact sequence</p> <p>$$0 \to \mathbb Z \to \mathbb Z \to \mathbb Z_n \to 0.$$</p> <p>Of course the first map is given by $\cdot m$ for some $m \in \mathbb Z$ and by looking at the cokernel, we get that $m = \pm n$. The second map is given by $1 \mapsto e$, where $e$ is co-prime to $n$.</p> <p>The much more involved case is the case $d \geq 2$, i.e. where the summand $\mathbb Z_d$ actually occurs.</p> <p>Call the first map $g: \mathbb Z \to \mathbb Z \oplus \mathbb Z_d$ and the second map $f: \mathbb Z \oplus \mathbb Z_d \to \mathbb Z_n$. Denote the kernel of $f$ by $K$. By exactness we get $K \cong \mathbb Z$.</p> <p>Note that $f(0,n)=0$. $K$ is torsion free, but $(0,n) \in \mathbb Z \oplus \mathbb Z_d$ is of course torsion (annihilated by $d$), hence we have $(0,n)=(0,0)$, i.e. 
$d|n$.</p> <blockquote> <p>So we have all possibilites for $A$, namely $A \in \{\mathbb Z \} \cup \{\mathbb Z \oplus \mathbb Z_d ~ | ~ d \geq 2,d | n\}$.</p> </blockquote> <p>We still have to figure out the maps in the second case:</p> <p>Of course $g$ is given by $g(1) = (m,e)$ for some $m,e \in \mathbb Z, 0 \leq e &lt;d$. Let us call the cokernel $C_{m,e}$.</p> <p>$C_{m_e}$ is generated by $e_1=(1,0),e_2=(0,1)$ and we have relations $me_1+ee_2=0$ and $de_2=0$, thus $C_{m_e}$ is presented by $$\begin{pmatrix}m&amp;0\\e&amp;d\end{pmatrix}.$$ Upto multiplication in $SL_2(\mathbb Z)$, this should be equal to $$\begin{pmatrix}n&amp;0\\0&amp;1\end{pmatrix}$$, i.e. by looking at fitting ideals, we get that</p> <p>$$md=\pm n ~~ \text{ and } ~~ (m,e,d)=(1).$$</p> <p>Thus we have found all possibilities for $g$. To find $f$, note that we have the following commutative diagram of exact sequences:</p> <p>$$\require{AMScd} \begin{CD} 0 @&gt;&gt;&gt; \mathbb Z^2 @&gt;\begin{pmatrix}m&amp;0\\e&amp;d\end{pmatrix}&gt;&gt; \mathbb Z^2 @&gt;&gt;&gt; C_{m,e} @&gt;&gt;&gt;0\\ @VVV @VXVV @VVYV @VVV @VVV \\ 0 @&gt;&gt;&gt; \mathbb Z^2 @&gt;&gt;\begin{pmatrix}n&amp;0\\0&amp;1\end{pmatrix}&gt; \mathbb Z^2 @&gt;p&gt;&gt; \mathbb Z_n @&gt;&gt;&gt;0 \end{CD}$$ , where $X$ and $Y$ are obtained via the <a href="https://en.wikipedia.org/wiki/Smith_normal_form#Algorithm" rel="nofollow noreferrer">algorithm to determine the Smith normal form</a>.</p> <p>The map $p \circ Y$ factors through $\mathbb Z \oplus \mathbb Z_d$ and the factorization is $f$.</p> Answer 19: <p>Since I don't know what we can assume, let's classify all those extensions by brute force. Since it's a problem in a chapter about homology, $A$ is probably meant to be abelian but let's go ahead and do the general case. I'll outline a proof in the form of exercises. 
At the end, I outline the relationship between the explicit constructions and the cohomological argument.</p> <p>First, name the arrows in the exact sequence: $i: \mathbf Z \rightarrow A$ and $p: A \rightarrow \mathbf Z_n$.</p> <p>Let $a = i(1)$ and let $b \in A$ such that $p(b) = 1$.</p> <blockquote> <p><strong>Exercise 1:</strong> $a$ and $b$ generate $A$</p> </blockquote> <p>So, $$A = \langle a, b : \text{ some relations we need to find}\rangle$$</p> <blockquote> <p><strong>Exercise 2</strong>: $b^n = a^r$ and $bab^{-1} = a^s$ for some integers $r$ and $s$.</p> </blockquote> <p><em>Hint: What are the images of $bab^{-1}$ and $b^n$ under $p$?.</em></p> <blockquote> <p><strong>Exercise 3</strong>: $a^{s^n} = a$ and thus $s = \pm 1$.</p> </blockquote> <p><em>Hint: use the relations we have so far. For the last part, recall that $a$ is the generator of a copy of $\mathbf Z$ in $A$.</em></p> <blockquote> <p><strong>Exercise 4:</strong> Suppose $s = 1$. Let $t = \gcd(n, r)$ and write $$1 = \alpha \frac n t + \beta \frac r t,$$ for suitable $\alpha, \beta \in \mathbf Z$. Then, $$A \cong \mathbf Z \times \mathbf Z_t,$$ with $i': \mathbf Z \rightarrow \mathbf Z \times \mathbf Z_t$ and $p': \mathbf Z \times \mathbf Z_t \rightarrow \mathbf Z_n$ given by $$i'(1) = (n/t,-\beta)\\ p'(1,0) = \beta \\ p'(0,1) = n/t.$$</p> </blockquote> <p><em>Hint (so it doesn't look like magic): $s = 1$ implies $A$ abelian and the relation $b^n = a^r$ can then be rewritten as $1 = (b^{n/t}a^{-r/t})^t$. In fact, $t$ is the order of $b^{n/t}a^{-r/t}$ in $A$. Show that $b^{n/t}a^{-r/t}$ and $b^{\beta}a^{\alpha}$ generate $A$ and consider the homomorphism $\varphi: \mathbf Z \times \mathbf Z_t \rightarrow A$ given by $\varphi(1,0) = b^{\beta}a^{\alpha}$ and $\varphi(0, 1)=b^{n/t}a^{-r/t}$. 
It sits in the commutative diagram</em> $\require{AMScd}$ \begin{CD} 0 @&gt;&gt;&gt; \mathbf Z @&gt;{i'}&gt;&gt; \mathbf Z \times \mathbf Z_t @&gt;{p'}&gt;&gt; \mathbf Z_n @&gt;&gt;&gt; 0\\ &amp; @V{=\,}VV @VV{\varphi}V @VV{\, =}V \\ 0 @&gt;&gt;&gt; \mathbf Z @&gt;&gt;{i}&gt; A @&gt;&gt;{p}&gt; \mathbf Z_n @&gt;&gt;&gt; 0\end{CD}</p> <blockquote> <p><strong>Exercise 5:</strong> Suppose $s = -1$. Then, $n$ is even, $r = 0$ and $$A \cong \mathbf Z \rtimes_\sigma \mathbf Z_n,$$ where $\sigma: \mathbf Z_n \rightarrow \operatorname{Aut}(\mathbf Z)$ is given by $\sigma(1)(1) = -1$. The maps $i': \mathbf Z \rightarrow \mathbf Z \rtimes \mathbf Z_n$ and $p': \mathbf Z \rtimes \mathbf Z_n \rightarrow \mathbf Z_n$ are given by $$i'(1) = (1,0)\\ p'(1,0) = 0 \\ p'(0,1) = 1.$$</p> </blockquote> <p><em>Hint: in this case $ba^rb^{-1} = a^{-r}$ and $a^r = b^n$, so $r = 0$ and $b^n = 1$. Conclude that $A$ is isomorphic to $\mathbf Z \rtimes \mathbf Z_n$ and classify all homomorphisms $\mathbf Z_n \rightarrow \operatorname{Aut}(\mathbf Z)$. Note that $\mathbf Z$ has only two automorphisms and, by exercise 3, $n$ is even when $s = -1$.</em></p> <hr> <p>What's going on in cohomological terms?</p> <p>$\operatorname{Ext}(\mathbf Z_n, \mathbf Z)$ classifies the extensions of $\mathbf Z_n$ by $\mathbf Z$ as $\mathbf Z$-modules or, in other words, as abelian groups. To compute $\operatorname{Ext}(\mathbf Z_n, \mathbf Z)$, one would usually apply $\operatorname{Hom}(\cdot, \mathbf Z)$ to the "multiplication by $n$" sequence $$0 \rightarrow \mathbf Z \rightarrow \mathbf Z \rightarrow \mathbf Z_n \rightarrow 0$$ to get $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = \mathbf Z/n\mathbf Z$. Exercise 4 makes this equality explicit. 
The class of $r$ in $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = \mathbf Z/n\mathbf Z$ corresponds to the extension $\mathbf Z \times \mathbf Z_t$ with maps $p'$ and $i'$ as in the statement of exercise 4.</p> <p>Non-abelian group extensions of $\mathbf Z_n$ by $\mathbf Z$ are not detected by $\operatorname{Ext}(\mathbf Z_n, \mathbf Z) = H^2(\mathbf Z_n, \mathbf Z)$. To detect more general extensions, one could look, for example, at $H^3$ (see <a href="http://sierra.nmsu.edu/morandi/notes/GroupExtensions.pdf" rel="nofollow noreferrer">here</a>). IMO Exercise 5 is a more down to earth way of treating this case.</p> Answer 20: <p>This is a nice exercise for a beginning section of homology theory, since doing this without Ext will make you really appreciate the machinery homological algebra has to offer.</p> <p>I do not think, that I will go through all details, but let us start:</p> <p>First of all, we have the case $\mathbb Z_d = 0$ (one could refer to this as $d=1$), i.e. a short exact sequence</p> <p>$$0 \to \mathbb Z \to \mathbb Z \to \mathbb Z_n \to 0.$$</p> <p>Of course the first map is given by $\cdot m$ for some $m \in \mathbb Z$ and by looking at the cokernel, we get that $m = \pm n$. The second map is given by $1 \mapsto e$, where $e$ is co-prime to $n$.</p> <p>The much more involved case is the case $d \geq 2$, i.e. where the summand $\mathbb Z_d$ actually occurs.</p> <p>Call the first map $g: \mathbb Z \to \mathbb Z \oplus \mathbb Z_d$ and the second map $f: \mathbb Z \oplus \mathbb Z_d \to \mathbb Z_n$. Denote the kernel of $f$ by $K$. By exactness we get $K \cong \mathbb Z$.</p> <p>Note that $f(0,n)=0$. $K$ is torsion free, but $(0,n) \in \mathbb Z \oplus \mathbb Z_d$ is of course torsion (annihilated by $d$), hence we have $(0,n)=(0,0)$, i.e. 
$d|n$.</p> <blockquote> <p>So we have all possibilites for $A$, namely $A \in \{\mathbb Z \} \cup \{\mathbb Z \oplus \mathbb Z_d ~ | ~ d \geq 2,d | n\}$.</p> </blockquote> <p>We still have to figure out the maps in the second case:</p> <p>Of course $g$ is given by $g(1) = (m,e)$ for some $m,e \in \mathbb Z, 0 \leq e &lt;d$. Let us call the cokernel $C_{m,e}$.</p> <p>$C_{m_e}$ is generated by $e_1=(1,0),e_2=(0,1)$ and we have relations $me_1+ee_2=0$ and $de_2=0$, thus $C_{m_e}$ is presented by $$\begin{pmatrix}m&amp;0\\e&amp;d\end{pmatrix}.$$ Upto multiplication in $SL_2(\mathbb Z)$, this should be equal to $$\begin{pmatrix}n&amp;0\\0&amp;1\end{pmatrix}$$, i.e. by looking at fitting ideals, we get that</p> <p>$$md=\pm n ~~ \text{ and } ~~ (m,e,d)=(1).$$</p> <p>Thus we have found all possibilities for $g$. To find $f$, note that we have the following commutative diagram of exact sequences:</p> <p>$$\require{AMScd} \begin{CD} 0 @&gt;&gt;&gt; \mathbb Z^2 @&gt;\begin{pmatrix}m&amp;0\\e&amp;d\end{pmatrix}&gt;&gt; \mathbb Z^2 @&gt;&gt;&gt; C_{m,e} @&gt;&gt;&gt;0\\ @VVV @VXVV @VVYV @VVV @VVV \\ 0 @&gt;&gt;&gt; \mathbb Z^2 @&gt;&gt;\begin{pmatrix}n&amp;0\\0&amp;1\end{pmatrix}&gt; \mathbb Z^2 @&gt;p&gt;&gt; \mathbb Z_n @&gt;&gt;&gt;0 \end{CD}$$ , where $X$ and $Y$ are obtained via the <a href="https://en.wikipedia.org/wiki/Smith_normal_form#Algorithm" rel="nofollow noreferrer">algorithm to determine the Smith normal form</a>.</p> <p>The map $p \circ Y$ factors through $\mathbb Z \oplus \mathbb Z_d$ and the factorization is $f$.</p>
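A numerical aside on the abelian case treated above: for $g: \mathbb Z \to \mathbb Z \oplus \mathbb Z_d$ with $g(1) = (m, e)$, the classification comes down to the cokernel of the integer matrix with rows $(m, 0)$ and $(e, d)$ being cyclic of order $n$, i.e. $md = \pm n$ and $\gcd(m, e, d) = 1$. For a nonsingular $2 \times 2$ integer matrix the Smith invariants are simply $d_1 = \gcd$ of all entries and $d_1 d_2 = |\det|$, which makes the condition easy to sanity-check by brute force. A small sketch — the function names are mine, not from the answers:

```python
from math import gcd

def cokernel_invariants(m, e, d):
    """Smith invariants (d1, d2) of the 2x2 integer matrix [[m, 0], [e, d]].

    For a nonsingular 2x2 integer matrix, d1 is the gcd of all entries and
    d1 * d2 = |det|; the cokernel of the matrix acting on Z^2 is Z/d1 x Z/d2.
    """
    det = abs(m * d)
    assert det != 0, "matrix must be nonsingular"
    d1 = gcd(gcd(m, e), d)
    return d1, det // d1


def gives_Zn(m, e, d, n):
    """True iff the cokernel is cyclic of order n,
    equivalently m*d = +-n and gcd(m, e, d) = 1."""
    return cokernel_invariants(m, e, d) == (1, n)
```

For $n = 12$: `gives_Zn(3, 1, 4, 12)` holds, so $d = 4$ occurs (and indeed $4 \mid 12$), while `gives_Zn(6, 2, 2, 12)` fails because $\gcd(6, 2, 2) = 2$; sweeping small $m$, $e$, $d$ confirms that the $d$ that occur for $n = 12$ are exactly its divisors.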
Partitions of n that generate all numbers smaller than n
2,114,575
6
https://math.stackexchange.com/questions/2114575/partitions-of-n-that-generate-all-numbers-smaller-than-n
<p>Consider a partition $$n=n_1+n_2+\cdots+n_k$$ such that each number $1,\cdots, n$ can be obtained by adding some of the numbers $n_1,n_2,\cdots,n_k$. For example, $$9=4+3+1+1,$$ </p> <p>and every number $1,2,\cdots,9$ can be written as a sum of some of the numbers $4,3,1,1$. This other partition $$9=6+1+1+1$$ fails the desired property, as $4$ (and $5$) cannot be given by any sum of $6,1,1,1$.</p> <p><strong>Question:</strong> Can we characterize which partitions of an arbitrary $n$ have this property? We clearly need at least one $1$ among $n_1,n_2,\cdots,n_k$, and intuitively we need many small numbers $n_i$. But I can't see much beyond this. Any idea or reference will be appreciated.</p>
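The defining property (every $1 \le k \le n$ is a sum of some sub-multiset of the parts) can be checked with a short subset-sum routine, and cross-checked against the characterisation quoted in the answers below. A rough sketch — the helper names `is_complete`, `partitions` and `complete_by_criterion` are mine, not from the posts:

```python
def is_complete(parts, n):
    """True iff parts is a partition of n and every 1..n is a subset sum."""
    if sum(parts) != n:
        return False
    reachable = [True] + [False] * n          # reachable[0] is the empty sum
    for p in parts:
        # 0/1-knapsack update, right to left so each part is used at most once
        for k in range(n, p - 1, -1):
            if reachable[k - p]:
                reachable[k] = True
    return all(reachable[1:])


def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing lists."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield []
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield [p] + rest


def complete_by_criterion(parts):
    """A126796 comment: each part is at most 1 + (sum of all smaller parts)."""
    total = 0
    for p in sorted(parts):
        if p > total + 1:
            return False
        total += p
    return True
```

For the question's examples, `is_complete([4, 3, 1, 1], 9)` holds while `is_complete([6, 1, 1, 1], 9)` does not; counting complete partitions of $n = 1, \dots, 10$ by brute force reproduces the sequence $1, 1, 2, 2, 4, 5, 8, 10, 16, 20$ listed in the answers, and for every partition of each $n \le 12$ the brute-force test agrees with the quoted criterion.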
[ { "answer_id": 2114653, "score": 1, "text": "<p>This is just a quick and dirty list of the first examples, for $n$ up to $10$. Feel free to edit, extend, or amend.</p>\n\n<p>$$1\\\\1+1\\\\\n1+1+1\\quad1+2\\\\\n1+1+1+1\\quad1+1+2\\\\\n1^5\\quad1+1+1+2\\quad1+2+2\\quad1+1+3\\\\\n1^6\\quad1^4+2\\quad1^2+2+2\\quad1^3+3\\quad1+2+3\\\\\n1^7\\quad1^5+2\\quad1^3+2+2\\quad1+2^3\\quad1^4+3\\quad1^2+2+3\\quad1^3+4\\quad1+2+4\\\\\n1^8\\quad1^6+2\\quad1^4+2+2\\quad1^2+2+2+2\\quad1^5+3\\quad1^3+2+3\\quad1^2+3+3\\quad1+2+2+3\\quad1^4+4\\quad1+1+2+4\\\\\n1^9\\quad1^7+2\\quad1^5+2+2\\quad1^3+2^3\\quad1+2^4\\quad1^6+3\\quad1^4+2+3\\quad1^2+2+2+3\\quad1^3+3+3\\quad1+2+3+3\\quad1^5+4\\quad1^3+2+4\\quad1+2+2+4\\quad1+1+3+4\\quad1^4+5\\quad1^2+2+5\\\\\n1^{10}\\quad1^8+2\\quad1^6+2^2\\quad1^4+2^3\\quad1^2+2^4\\quad1^7+3\\quad1^5+2+3\\quad1^3+2^2+3\\quad1+2^3+3\\quad1^4+3^2\\quad1^2+2+3^2\\quad1^6+4\\quad1^4+2+4\\quad1^2+2^2+4\\quad1^3+3+4\\quad1+2+3+4\\quad1^5+5\\quad1^3+2+5\\quad1+2^2+5\\quad1+1+3+5$$</p>\n\n<p>I hope the meaning of the superscript notation is clear, and I hope someone will check that I didn't make any mistakes or overlook anything. The list so far gives the sequence</p>\n\n<p>$$1,1,2,2,4,5,8,10,16,20,\\ldots$$</p>\n\n<p>which (after correcting a pair of mistakes in the original posting here) Peter Taylor found in the OEIS.</p>\n" }, { "answer_id": 2115251, "score": 1, "text": "<p>Let $\\lambda$ be a partition of $n$. The required condition is that $\\lambda$ contain partitions $\\lambda_i$ of each $1 \\le i &lt; n$. 
Clearly if $\\lambda$ contains a partition of $j$ then it also contains a partition of $(n - j)$, being the multiset difference $\\lambda - \\lambda_j$.</p>\n\n<p>Therefore the first thing to notice is that $\\lambda$ cannot contain any element $a &gt; \\lceil \\frac{n}{2} \\rceil$: if it did, then $\\{a\\}$ could not be part of a partition of $\\lceil \\frac{n}{2} \\rceil$, and $\\lambda - \\{a\\}$, being a partition of $(n - a) &lt; n - \\lceil \\frac{n}{2} \\rceil \\le \\lceil \\frac{n}{2} \\rceil$, cannot contain a partition of $\\lceil \\frac{n}{2} \\rceil$ either.</p>\n\n<p>Now, suppose that the largest element of $\\lambda$ is $m$. It is certainly <em>sufficient</em> that $\\lambda - \\{m\\}$ should satisfy the corresponding condition of providing partitions for each $1 \\le i &lt; (n - m)$. Proof: $\\lambda - \\{m\\}$ is a partition of $(n - m)$ and provides partitions for each smaller natural number, so it remains to construct partitions $\\lambda_i$ for $(n - m) &lt; i &lt; n$. We can do this by taking, for each $j$ with $(n - 2m) &lt; j &lt; n - m$, a partition of $j$ from $\\lambda - \\{m\\}$ and adding $\\{m\\}$ to it. This can only fail if $j &lt; 0$, which can only happen if $(n - 2m) &lt; -1$. Since $m \\le \\lceil \\frac{n}{2} \\rceil$ we have $n - 2m \\ge n - 2\\lceil \\frac{n}{2} \\rceil$, which is $0$ if $n$ is even and $-1$ if $n$ is odd, so all cases are covered.</p>\n\n<p>The interesting question is whether it's <em>necessary</em> that $\\lambda - \\{m\\}$ should satisfy the same condition. Clearly it must contain partitions of each $1 \\le i &lt; m$, since $\\{m\\}$ doesn't participate in them. And by the simple principle of taking complements in $\\lambda$ it's clear that for each $m \\le i &lt; n$ the remnant $\\lambda - \\{m\\}$ must either contain partitions of $(i - m)$ and $n - i$; or of $i$ and $n - m - i$. 
Is that sufficient?</p>\n\n<p>My intuition is that it's necessary, and testing on small examples (up to $n = 30$) supports that, but I haven't proved it.</p>\n\n<p>In the Online Encyclopedia of Integer Sequences it's <a href=\"https://oeis.org/A126796\" rel=\"nofollow noreferrer\">A126796</a> and a comment claims the characterisation</p>\n\n<blockquote>\n <p>A partition is complete iff each part is no more than 1 more than the sum of all smaller parts. (This includes the smallest part, which thus must be 1.) - Franklin T. Adams-Watters, Mar 22 2007</p>\n</blockquote>\n" }, { "answer_id": 2114653, "score": 1, "text": "<p>This is just a quick and dirty list of the first examples, for $n$ up to $10$. Feel free to edit, extend, or amend.</p>\n\n<p>$$1\\\\1+1\\\\\n1+1+1\\quad1+2\\\\\n1+1+1+1\\quad1+1+2\\\\\n1^5\\quad1+1+1+2\\quad1+2+2\\quad1+1+3\\\\\n1^6\\quad1^4+2\\quad1^2+2+2\\quad1^3+3\\quad1+2+3\\\\\n1^7\\quad1^5+2\\quad1^3+2+2\\quad1+2^3\\quad1^4+3\\quad1^2+2+3\\quad1^3+4\\quad1+2+4\\\\\n1^8\\quad1^6+2\\quad1^4+2+2\\quad1^2+2+2+2\\quad1^5+3\\quad1^3+2+3\\quad1^2+3+3\\quad1+2+2+3\\quad1^4+4\\quad1+1+2+4\\\\\n1^9\\quad1^7+2\\quad1^5+2+2\\quad1^3+2^3\\quad1+2^4\\quad1^6+3\\quad1^4+2+3\\quad1^2+2+2+3\\quad1^3+3+3\\quad1+2+3+3\\quad1^5+4\\quad1^3+2+4\\quad1+2+2+4\\quad1+1+3+4\\quad1^4+5\\quad1^2+2+5\\\\\n1^{10}\\quad1^8+2\\quad1^6+2^2\\quad1^4+2^3\\quad1^2+2^4\\quad1^7+3\\quad1^5+2+3\\quad1^3+2^2+3\\quad1+2^3+3\\quad1^4+3^2\\quad1^2+2+3^2\\quad1^6+4\\quad1^4+2+4\\quad1^2+2^2+4\\quad1^3+3+4\\quad1+2+3+4\\quad1^5+5\\quad1^3+2+5\\quad1+2^2+5\\quad1+1+3+5$$</p>\n\n<p>I hope the meaning of the superscript notation is clear, and I hope someone will check that I didn't make any mistakes or overlook anything. 
The list so far gives the sequence</p>\n\n<p>$$1,1,2,2,4,5,8,10,16,20,\\ldots$$</p>\n\n<p>which (after correcting a pair of mistakes in the original posting here) Peter Taylor found in the OEIS.</p>\n" }, { "answer_id": 2115251, "score": 1, "text": "<p>Let $\\lambda$ be a partition of $n$. The required condition is that $\\lambda$ contain partitions $\\lambda_i$ of each $1 \\le i &lt; n$. Clearly if $\\lambda$ contains a partition of $j$ then it also contains a partition of $(n - j)$, being the multiset difference $\\lambda - \\lambda_j$.</p>\n\n<p>Therefore the first thing to notice is that $\\lambda$ cannot contain any element $a &gt; \\lceil \\frac{n}{2} \\rceil$, for if it did then $\\{a\\}$ cannot be part of a partition of $\\lceil \\frac{n}{2} \\rceil$ and $\\lambda - \\{a\\}$ is a partition of $(n - a) &lt; (n - \\lceil \\frac{n}{2} \\rceil) &lt; \\lceil \\frac{n}{2} \\rceil$ cannot contain a partition of $\\lceil \\frac{n}{2} \\rceil$.</p>\n\n<p>Now, suppose that the largest element of $\\lambda$ is $m$. It is certainly <em>sufficient</em> that $\\lambda - \\{m\\}$ should satisfy the corresponding condition of providing partitions for each $1 \\le i &lt; (n - m)$. Proof: $\\lambda - \\{m\\}$ is a partition of $(n - m)$ and provides partitions for each smaller natural number, so it remains to construct partitions $\\lambda_i$ for $(n - m) &lt; i &lt; n$. We can do this by taking partitions from $\\lambda - \\{m\\}$ for $(n - 2m) &lt; j &lt; n - m$ and then adding $\\{m\\}$ to each one. This can only fail if $j &lt; 0$, which can only happen if $(n - 2m) &lt; -1$. Since $m \\le \\lceil \\frac{n}{2} \\rceil$ we have $n - 2m \\ge n - 2\\lceil \\frac{n}{2} \\rceil$, which is $0$ if $n$ is even and $-1$ if $n$ is odd, so all cases are covered.</p>\n\n<p>The interesting question is whether it's <em>necessary</em> that $\\lambda - \\{m\\}$ should satisfy the same condition. 
Clearly it must contain partitions of $1 \\le i &lt; m$, since $\\{m\\}$ doesn't participate in them. And by the simple principle of taking complements in $\\lambda$ it's clear that for each $m \\le i &lt; n$ the remnant $\\lambda - \\{m\\}$ must either contain partitions of $(i - m)$ and $n - i$; or $i$ and $n - m - i$. Is that sufficient?</p>\n\n<p>My intuition is that it's necessary, and testing on small examples (up to $n = 30$) supports that, but I haven't proved it.</p>\n\n<p>In the Online Encyclopedia of Integer Sequences it's <a href=\"https://oeis.org/A126796\" rel=\"nofollow noreferrer\">A126796</a> and a comment claims the characterisation</p>\n\n<blockquote>\n <p>A partition is complete iff each part is no more than 1 more than the sum of all smaller parts. (This includes the smallest part, which thus must be 1.) - Franklin T. Adams-Watters, Mar 22 2007</p>\n</blockquote>\n" }, { "answer_id": 2114653, "score": 1, "text": "<p>This is just a quick and dirty list of the first examples, for $n$ up to $10$. 
Feel free to edit, extend, or amend.</p>\n\n<p>$$1\\\\1+1\\\\\n1+1+1\\quad1+2\\\\\n1+1+1+1\\quad1+1+2\\\\\n1^5\\quad1+1+1+2\\quad1+2+2\\quad1+1+3\\\\\n1^6\\quad1^4+2\\quad1^2+2+2\\quad1^3+3\\quad1+2+3\\\\\n1^7\\quad1^5+2\\quad1^3+2+2\\quad1+2^3\\quad1^4+3\\quad1^2+2+3\\quad1^3+4\\quad1+2+4\\\\\n1^8\\quad1^6+2\\quad1^4+2+2\\quad1^2+2+2+2\\quad1^5+3\\quad1^3+2+3\\quad1^2+3+3\\quad1+2+2+3\\quad1^4+4\\quad1+1+2+4\\\\\n1^9\\quad1^7+2\\quad1^5+2+2\\quad1^3+2^3\\quad1+2^4\\quad1^6+3\\quad1^4+2+3\\quad1^2+2+2+3\\quad1^3+3+3\\quad1+2+3+3\\quad1^5+4\\quad1^3+2+4\\quad1+2+2+4\\quad1+1+3+4\\quad1^4+5\\quad1^2+2+5\\\\\n1^{10}\\quad1^8+2\\quad1^6+2^2\\quad1^4+2^3\\quad1^2+2^4\\quad1^7+3\\quad1^5+2+3\\quad1^3+2^2+3\\quad1+2^3+3\\quad1^4+3^2\\quad1^2+2+3^2\\quad1^6+4\\quad1^4+2+4\\quad1^2+2^2+4\\quad1^3+3+4\\quad1+2+3+4\\quad1^5+5\\quad1^3+2+5\\quad1+2^2+5\\quad1+1+3+5$$</p>\n\n<p>I hope the meaning of the superscript notation is clear, and I hope someone will check that I didn't make any mistakes or overlook anything. The list so far gives the sequence</p>\n\n<p>$$1,1,2,2,4,5,8,10,16,20,\\ldots$$</p>\n\n<p>which (after correcting a pair of mistakes in the original posting here) Peter Taylor found in the OEIS.</p>\n" }, { "answer_id": 2115251, "score": 1, "text": "<p>Let $\\lambda$ be a partition of $n$. The required condition is that $\\lambda$ contain partitions $\\lambda_i$ of each $1 \\le i &lt; n$. 
Clearly if $\\lambda$ contains a partition of $j$ then it also contains a partition of $(n - j)$, being the multiset difference $\\lambda - \\lambda_j$.</p>\n\n<p>Therefore the first thing to notice is that $\\lambda$ cannot contain any element $a &gt; \\lceil \\frac{n}{2} \\rceil$, for if it did then $\\{a\\}$ cannot be part of a partition of $\\lceil \\frac{n}{2} \\rceil$ and $\\lambda - \\{a\\}$ is a partition of $(n - a) &lt; (n - \\lceil \\frac{n}{2} \\rceil) &lt; \\lceil \\frac{n}{2} \\rceil$ cannot contain a partition of $\\lceil \\frac{n}{2} \\rceil$.</p>\n\n<p>Now, suppose that the largest element of $\\lambda$ is $m$. It is certainly <em>sufficient</em> that $\\lambda - \\{m\\}$ should satisfy the corresponding condition of providing partitions for each $1 \\le i &lt; (n - m)$. Proof: $\\lambda - \\{m\\}$ is a partition of $(n - m)$ and provides partitions for each smaller natural number, so it remains to construct partitions $\\lambda_i$ for $(n - m) &lt; i &lt; n$. We can do this by taking partitions from $\\lambda - \\{m\\}$ for $(n - 2m) &lt; j &lt; n - m$ and then adding $\\{m\\}$ to each one. This can only fail if $j &lt; 0$, which can only happen if $(n - 2m) &lt; -1$. Since $m \\le \\lceil \\frac{n}{2} \\rceil$ we have $n - 2m \\ge n - 2\\lceil \\frac{n}{2} \\rceil$, which is $0$ if $n$ is even and $-1$ if $n$ is odd, so all cases are covered.</p>\n\n<p>The interesting question is whether it's <em>necessary</em> that $\\lambda - \\{m\\}$ should satisfy the same condition. Clearly it must contain partitions of $1 \\le i &lt; m$, since $\\{m\\}$ doesn't participate in them. And by the simple principle of taking complements in $\\lambda$ it's clear that for each $m \\le i &lt; n$ the remnant $\\lambda - \\{m\\}$ must either contain partitions of $(i - m)$ and $n - i$; or $i$ and $n - m - i$. 
Is that sufficient?</p>\n\n<p>My intuition is that it's necessary, and testing on small examples (up to $n = 30$) supports that, but I haven't proved it.</p>\n\n<p>In the Online Encyclopedia of Integer Sequences it's <a href=\"https://oeis.org/A126796\" rel=\"nofollow noreferrer\">A126796</a> and a comment claims the characterisation</p>\n\n<blockquote>\n <p>A partition is complete iff each part is no more than 1 more than the sum of all smaller parts. (This includes the smallest part, which thus must be 1.) - Franklin T. Adams-Watters, Mar 22 2007</p>\n</blockquote>\n" } ]
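The Adams-Watters completeness criterion quoted in answer 2115251 (each part at most one more than the sum of all smaller parts) can be cross-checked against a brute-force subset-sum test. The sketch below is not from either answer — the function names and structure are my own — and it reproduces the counts $1,1,2,2,4,5,8,10,16,20$ that the enumeration in answer 2114653 gives for $n = 1$ to $10$:

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as tuples, largest part first."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def subset_sums(parts):
    """All sums obtainable from sub-multisets of the parts."""
    sums = {0}
    for p in parts:
        sums |= {s + p for s in sums}
    return sums

def is_complete_bruteforce(parts):
    """Complete iff every 0 <= i <= n is a sum of some sub-multiset."""
    n = sum(parts)
    return subset_sums(parts) >= set(range(n + 1))

def is_complete_criterion(parts):
    """Adams-Watters: each part <= 1 + sum of all smaller parts."""
    total = 0
    for p in sorted(parts):
        if p > total + 1:
            return False
        total += p
    return True

counts = []
for n in range(1, 11):
    cnt = 0
    for lam in partitions(n):
        # the two tests should agree on every partition
        assert is_complete_bruteforce(lam) == is_complete_criterion(lam)
        if is_complete_criterion(lam):
            cnt += 1
    counts.append(cnt)
print(counts)  # -> [1, 1, 2, 2, 4, 5, 8, 10, 16, 20]
```

The criterion check is linear in the number of parts, whereas the subset-sum check is exponential, so the quoted characterisation is also the practical way to test larger partitions.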
Question: Partitions of n that generate all numbers smaller than n Answer 1: <p>This is just a quick and dirty list of the first examples, for $n$ up to $10$. Feel free to edit, extend, or amend.</p> <p>$$1\\1+1\\ 1+1+1\quad1+2\\ 1+1+1+1\quad1+1+2\\ 1^5\quad1+1+1+2\quad1+2+2\quad1+1+3\\ 1^6\quad1^4+2\quad1^2+2+2\quad1^3+3\quad1+2+3\\ 1^7\quad1^5+2\quad1^3+2+2\quad1+2^3\quad1^4+3\quad1^2+2+3\quad1^3+4\quad1+2+4\\ 1^8\quad1^6+2\quad1^4+2+2\quad1^2+2+2+2\quad1^5+3\quad1^3+2+3\quad1^2+3+3\quad1+2+2+3\quad1^4+4\quad1+1+2+4\\ 1^9\quad1^7+2\quad1^5+2+2\quad1^3+2^3\quad1+2^4\quad1^6+3\quad1^4+2+3\quad1^2+2+2+3\quad1^3+3+3\quad1+2+3+3\quad1^5+4\quad1^3+2+4\quad1+2+2+4\quad1+1+3+4\quad1^4+5\quad1^2+2+5\\ 1^{10}\quad1^8+2\quad1^6+2^2\quad1^4+2^3\quad1^2+2^4\quad1^7+3\quad1^5+2+3\quad1^3+2^2+3\quad1+2^3+3\quad1^4+3^2\quad1^2+2+3^2\quad1^6+4\quad1^4+2+4\quad1^2+2^2+4\quad1^3+3+4\quad1+2+3+4\quad1^5+5\quad1^3+2+5\quad1+2^2+5\quad1+1+3+5$$</p> <p>I hope the meaning of the superscript notation is clear, and I hope someone will check that I didn't make any mistakes or overlook anything. The list so far gives the sequence</p> <p>$$1,1,2,2,4,5,8,10,16,20,\ldots$$</p> <p>which (after correcting a pair of mistakes in the original posting here) Peter Taylor found in the OEIS.</p> Answer 2: <p>Let $\lambda$ be a partition of $n$. The required condition is that $\lambda$ contain partitions $\lambda_i$ of each $1 \le i &lt; n$. 
Clearly if $\lambda$ contains a partition of $j$ then it also contains a partition of $(n - j)$, being the multiset difference $\lambda - \lambda_j$.</p> <p>Therefore the first thing to notice is that $\lambda$ cannot contain any element $a &gt; \lceil \frac{n}{2} \rceil$, for if it did then $\{a\}$ cannot be part of a partition of $\lceil \frac{n}{2} \rceil$ and $\lambda - \{a\}$ is a partition of $(n - a) &lt; (n - \lceil \frac{n}{2} \rceil) &lt; \lceil \frac{n}{2} \rceil$ cannot contain a partition of $\lceil \frac{n}{2} \rceil$.</p> <p>Now, suppose that the largest element of $\lambda$ is $m$. It is certainly <em>sufficient</em> that $\lambda - \{m\}$ should satisfy the corresponding condition of providing partitions for each $1 \le i &lt; (n - m)$. Proof: $\lambda - \{m\}$ is a partition of $(n - m)$ and provides partitions for each smaller natural number, so it remains to construct partitions $\lambda_i$ for $(n - m) &lt; i &lt; n$. We can do this by taking partitions from $\lambda - \{m\}$ for $(n - 2m) &lt; j &lt; n - m$ and then adding $\{m\}$ to each one. This can only fail if $j &lt; 0$, which can only happen if $(n - 2m) &lt; -1$. Since $m \le \lceil \frac{n}{2} \rceil$ we have $n - 2m \ge n - 2\lceil \frac{n}{2} \rceil$, which is $0$ if $n$ is even and $-1$ if $n$ is odd, so all cases are covered.</p> <p>The interesting question is whether it's <em>necessary</em> that $\lambda - \{m\}$ should satisfy the same condition. Clearly it must contain partitions of $1 \le i &lt; m$, since $\{m\}$ doesn't participate in them. And by the simple principle of taking complements in $\lambda$ it's clear that for each $m \le i &lt; n$ the remnant $\lambda - \{m\}$ must either contain partitions of $(i - m)$ and $n - i$; or $i$ and $n - m - i$. 
Is that sufficient?</p> <p>My intuition is that it's necessary, and testing on small examples (up to $n = 30$) supports that, but I haven't proved it.</p> <p>In the Online Encyclopedia of Integer Sequences it's <a href="https://oeis.org/A126796" rel="nofollow noreferrer">A126796</a> and a comment claims the characterisation</p> <blockquote> <p>A partition is complete iff each part is no more than 1 more than the sum of all smaller parts. (This includes the smallest part, which thus must be 1.) - Franklin T. Adams-Watters, Mar 22 2007</p> </blockquote>
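The small-case testing mentioned above can be sketched as follows. This snippet (function names are mine, not from the answer) enumerates partitions of $n \le 10$, checks completeness directly via sub-multiset sums, compares that against the quoted A126796 characterisation, and reproduces the counts $1,1,2,2,4,5,8,10,16,20$:

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def subset_sums(parts):
    """All sums achievable by sub-multisets of the partition."""
    sums = {0}
    for p in parts:
        sums |= {s + p for s in sums}
    return sums

def complete_brute(parts):
    """Direct check: every 1 <= i < n is a sub-multiset sum."""
    n = sum(parts)
    return all(i in subset_sums(parts) for i in range(1, n))

def complete_criterion(parts):
    """Quoted criterion: each part is at most 1 more than the sum of
    the preceding parts, taken in ascending order."""
    total = 0
    for p in sorted(parts):
        if p > total + 1:
            return False
        total += p
    return True

counts = []
for n in range(1, 11):
    ps = list(partitions(n))
    # the two tests agree on every partition up to n = 10
    assert all(complete_brute(p) == complete_criterion(p) for p in ps)
    counts.append(sum(map(complete_brute, ps)))

print(counts)  # [1, 1, 2, 2, 4, 5, 8, 10, 16, 20], matching A126796
```

The criterion check runs in linear time per partition, which is why the brute-force search to $n = 30$ mentioned above is feasible.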
How can I find $\sum_{cyc}\sin x\sin y$
2,114,698
6
https://math.stackexchange.com/questions/2114698/how-can-i-find-sum-cyc-sin-x-sin-y
<p>$x$, $y$ &amp; $z$ are real numbers such that</p> <p>$$\frac{\sin{x}+\sin{y}+\sin{z}}{\sin{(x+y+z)}}=\frac{\cos{x}+\cos{y}+\cos{z}}{\cos{(x+y+z)}}=2$$</p> <p>find the value of</p> <p>$$\sum_{cyc}\sin x\sin y$$</p> <p>All help would be appreciated.</p>
[ { "answer_id": 2114758, "score": 3, "text": "<p>Since\n\\begin{align*}\n2 \\cos(x+y+z) &amp;= \\cos x + \\cos y + \\cos z \\\\\n2 \\sin(x+y+z) &amp;= \\sin x + \\sin y +\\sin z\n\\end{align*}\n\\begin{align*}\n2 e^{i(x+y+z)} &amp; = e^{ix} + e^{iy} + e^{iz} \n\\end{align*}\nMultiplying throughout by $e^{-ix}$, we get\n\\begin{align*}\n2e^{i(y+z)} = 1 + e^{i(y-x)} + e^{i(z-x)} \n\\end{align*}\nEquating the real parts, we get\n\\begin{align*}\n2\\cos(y+z) &amp;= 1 + \\cos(y-x) +\\cos(z-x)\\\\\n2(\\cos y \\cos z - \\sin y \\sin z) &amp; = 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos x + \\sin z \\sin x\n\\end{align*}\nSimilarly, we get\n\\begin{align*}\n2(\\cos x \\cos z - \\sin x \\sin z) &amp;= 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos y + \\sin z \\sin y\\\\\n2(\\cos x \\cos y - \\sin x \\sin y) &amp;= 1 + \\cos x \\cos z + \\sin x \\sin z+ \\cos z \\cos y + \\sin z \\sin y\\\\\n\\end{align*}\nAdding, and canceling $2 \\sum \\cos x \\cos y$ from the sides, we get\n$$\\sum \\sin x \\sin y = -\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114762, "score": 3, "text": "<p>I used the Blue's beautiful idea.</p>\n\n<p>Let $e^{ix}=a$, $e^{iy}=b$ and $e^{iz}=c$.</p>\n\n<p>Hence, $\\sin{x}=\\frac{a-\\frac{1}{a}}{2i}=\\frac{a^2-1}{2ai}$, $\\cos{x}=\\frac{a^2+1}{2a}$, $\\sin{y}=\\frac{b^2-1}{2bi}$, $\\sin{x}=\\frac{c^2-1}{2ci}$ and $\\cos{x}=\\frac{c^2+1}{2c}$.</p>\n\n<p>Thus, $\\sum\\limits_{cyc}\\sin{x}=2\\sin(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc-ab)=2(a^2b^2c^2-1)$ and</p>\n\n<p>$\\sum\\limits_{cyc}\\cos{x}=2\\cos(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc+ab)=2(a^2b^2c^2+1)$ or\n$ab+ac+bc=2$ and $a+b+c=2abc$.</p>\n\n<p>Thus, $$-\\sum\\limits_{cyc}\\sin{x}\\sin{y}=\\sum_{cyc}\\frac{(a^2-1)(b^2-1)}{4ab}=\\frac{\\sum\\limits_{cyc}c(a^2-1)(b^2-1)}{4abc}=$$\n$$=\\frac{abc(ab+ac+bc)-\\sum\\limits_{cyc}(a^2b+a^2c)+a+b+c}{4abc}=\\frac{abc(ab+ac+bc)-(a+b+c)(ab+ac+bc)+3abc+a+b+c}{4abc}=\\frac{1}{2}-1+\\frac{3}{4}+\\frac{1}{2}=\\frac{3}{4}$$</p>\n" }, { 
"answer_id": 2114758, "score": 3, "text": "<p>Since\n\\begin{align*}\n2 \\cos(x+y+z) &amp;= \\cos x + \\cos y + \\cos z \\\\\n2 \\sin(x+y+z) &amp;= \\sin x + \\sin y +\\sin z\n\\end{align*}\n\\begin{align*}\n2 e^{i(x+y+z)} &amp; = e^{ix} + e^{iy} + e^{iz} \n\\end{align*}\nMultiplying throughout by $e^{-ix}$, we get\n\\begin{align*}\n2e^{i(y+z)} = 1 + e^{i(y-x)} + e^{i(z-x)} \n\\end{align*}\nEquating the real parts, we get\n\\begin{align*}\n2\\cos(y+z) &amp;= 1 + \\cos(y-x) +\\cos(z-x)\\\\\n2(\\cos y \\cos z - \\sin y \\sin z) &amp; = 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos x + \\sin z \\sin x\n\\end{align*}\nSimilarly, we get\n\\begin{align*}\n2(\\cos x \\cos z - \\sin x \\sin z) &amp;= 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos y + \\sin z \\sin y\\\\\n2(\\cos x \\cos y - \\sin x \\sin y) &amp;= 1 + \\cos x \\cos z + \\sin x \\sin z+ \\cos z \\cos y + \\sin z \\sin y\\\\\n\\end{align*}\nAdding, and canceling $2 \\sum \\cos x \\cos y$ from the sides, we get\n$$\\sum \\sin x \\sin y = -\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114762, "score": 3, "text": "<p>I used the Blue's beautiful idea.</p>\n\n<p>Let $e^{ix}=a$, $e^{iy}=b$ and $e^{iz}=c$.</p>\n\n<p>Hence, $\\sin{x}=\\frac{a-\\frac{1}{a}}{2i}=\\frac{a^2-1}{2ai}$, $\\cos{x}=\\frac{a^2+1}{2a}$, $\\sin{y}=\\frac{b^2-1}{2bi}$, $\\sin{x}=\\frac{c^2-1}{2ci}$ and $\\cos{x}=\\frac{c^2+1}{2c}$.</p>\n\n<p>Thus, $\\sum\\limits_{cyc}\\sin{x}=2\\sin(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc-ab)=2(a^2b^2c^2-1)$ and</p>\n\n<p>$\\sum\\limits_{cyc}\\cos{x}=2\\cos(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc+ab)=2(a^2b^2c^2+1)$ or\n$ab+ac+bc=2$ and $a+b+c=2abc$.</p>\n\n<p>Thus, $$-\\sum\\limits_{cyc}\\sin{x}\\sin{y}=\\sum_{cyc}\\frac{(a^2-1)(b^2-1)}{4ab}=\\frac{\\sum\\limits_{cyc}c(a^2-1)(b^2-1)}{4abc}=$$\n$$=\\frac{abc(ab+ac+bc)-\\sum\\limits_{cyc}(a^2b+a^2c)+a+b+c}{4abc}=\\frac{abc(ab+ac+bc)-(a+b+c)(ab+ac+bc)+3abc+a+b+c}{4abc}=\\frac{1}{2}-1+\\frac{3}{4}+\\frac{1}{2}=\\frac{3}{4}$$</p>\n" }, { "answer_id": 
2114758, "score": 3, "text": "<p>Since\n\\begin{align*}\n2 \\cos(x+y+z) &amp;= \\cos x + \\cos y + \\cos z \\\\\n2 \\sin(x+y+z) &amp;= \\sin x + \\sin y +\\sin z\n\\end{align*}\n\\begin{align*}\n2 e^{i(x+y+z)} &amp; = e^{ix} + e^{iy} + e^{iz} \n\\end{align*}\nMultiplying throughout by $e^{-ix}$, we get\n\\begin{align*}\n2e^{i(y+z)} = 1 + e^{i(y-x)} + e^{i(z-x)} \n\\end{align*}\nEquating the real parts, we get\n\\begin{align*}\n2\\cos(y+z) &amp;= 1 + \\cos(y-x) +\\cos(z-x)\\\\\n2(\\cos y \\cos z - \\sin y \\sin z) &amp; = 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos x + \\sin z \\sin x\n\\end{align*}\nSimilarly, we get\n\\begin{align*}\n2(\\cos x \\cos z - \\sin x \\sin z) &amp;= 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos y + \\sin z \\sin y\\\\\n2(\\cos x \\cos y - \\sin x \\sin y) &amp;= 1 + \\cos x \\cos z + \\sin x \\sin z+ \\cos z \\cos y + \\sin z \\sin y\\\\\n\\end{align*}\nAdding, and canceling $2 \\sum \\cos x \\cos y$ from the sides, we get\n$$\\sum \\sin x \\sin y = -\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114762, "score": 3, "text": "<p>I used the Blue's beautiful idea.</p>\n\n<p>Let $e^{ix}=a$, $e^{iy}=b$ and $e^{iz}=c$.</p>\n\n<p>Hence, $\\sin{x}=\\frac{a-\\frac{1}{a}}{2i}=\\frac{a^2-1}{2ai}$, $\\cos{x}=\\frac{a^2+1}{2a}$, $\\sin{y}=\\frac{b^2-1}{2bi}$, $\\sin{x}=\\frac{c^2-1}{2ci}$ and $\\cos{x}=\\frac{c^2+1}{2c}$.</p>\n\n<p>Thus, $\\sum\\limits_{cyc}\\sin{x}=2\\sin(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc-ab)=2(a^2b^2c^2-1)$ and</p>\n\n<p>$\\sum\\limits_{cyc}\\cos{x}=2\\cos(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc+ab)=2(a^2b^2c^2+1)$ or\n$ab+ac+bc=2$ and $a+b+c=2abc$.</p>\n\n<p>Thus, $$-\\sum\\limits_{cyc}\\sin{x}\\sin{y}=\\sum_{cyc}\\frac{(a^2-1)(b^2-1)}{4ab}=\\frac{\\sum\\limits_{cyc}c(a^2-1)(b^2-1)}{4abc}=$$\n$$=\\frac{abc(ab+ac+bc)-\\sum\\limits_{cyc}(a^2b+a^2c)+a+b+c}{4abc}=\\frac{abc(ab+ac+bc)-(a+b+c)(ab+ac+bc)+3abc+a+b+c}{4abc}=\\frac{1}{2}-1+\\frac{3}{4}+\\frac{1}{2}=\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114758, 
"score": 3, "text": "<p>Since\n\\begin{align*}\n2 \\cos(x+y+z) &amp;= \\cos x + \\cos y + \\cos z \\\\\n2 \\sin(x+y+z) &amp;= \\sin x + \\sin y +\\sin z\n\\end{align*}\n\\begin{align*}\n2 e^{i(x+y+z)} &amp; = e^{ix} + e^{iy} + e^{iz} \n\\end{align*}\nMultiplying throughout by $e^{-ix}$, we get\n\\begin{align*}\n2e^{i(y+z)} = 1 + e^{i(y-x)} + e^{i(z-x)} \n\\end{align*}\nEquating the real parts, we get\n\\begin{align*}\n2\\cos(y+z) &amp;= 1 + \\cos(y-x) +\\cos(z-x)\\\\\n2(\\cos y \\cos z - \\sin y \\sin z) &amp; = 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos x + \\sin z \\sin x\n\\end{align*}\nSimilarly, we get\n\\begin{align*}\n2(\\cos x \\cos z - \\sin x \\sin z) &amp;= 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos y + \\sin z \\sin y\\\\\n2(\\cos x \\cos y - \\sin x \\sin y) &amp;= 1 + \\cos x \\cos z + \\sin x \\sin z+ \\cos z \\cos y + \\sin z \\sin y\\\\\n\\end{align*}\nAdding, and canceling $2 \\sum \\cos x \\cos y$ from the sides, we get\n$$\\sum \\sin x \\sin y = -\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114762, "score": 3, "text": "<p>I used the Blue's beautiful idea.</p>\n\n<p>Let $e^{ix}=a$, $e^{iy}=b$ and $e^{iz}=c$.</p>\n\n<p>Hence, $\\sin{x}=\\frac{a-\\frac{1}{a}}{2i}=\\frac{a^2-1}{2ai}$, $\\cos{x}=\\frac{a^2+1}{2a}$, $\\sin{y}=\\frac{b^2-1}{2bi}$, $\\sin{x}=\\frac{c^2-1}{2ci}$ and $\\cos{x}=\\frac{c^2+1}{2c}$.</p>\n\n<p>Thus, $\\sum\\limits_{cyc}\\sin{x}=2\\sin(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc-ab)=2(a^2b^2c^2-1)$ and</p>\n\n<p>$\\sum\\limits_{cyc}\\cos{x}=2\\cos(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc+ab)=2(a^2b^2c^2+1)$ or\n$ab+ac+bc=2$ and $a+b+c=2abc$.</p>\n\n<p>Thus, $$-\\sum\\limits_{cyc}\\sin{x}\\sin{y}=\\sum_{cyc}\\frac{(a^2-1)(b^2-1)}{4ab}=\\frac{\\sum\\limits_{cyc}c(a^2-1)(b^2-1)}{4abc}=$$\n$$=\\frac{abc(ab+ac+bc)-\\sum\\limits_{cyc}(a^2b+a^2c)+a+b+c}{4abc}=\\frac{abc(ab+ac+bc)-(a+b+c)(ab+ac+bc)+3abc+a+b+c}{4abc}=\\frac{1}{2}-1+\\frac{3}{4}+\\frac{1}{2}=\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114758, "score": 3, 
"text": "<p>Since\n\\begin{align*}\n2 \\cos(x+y+z) &amp;= \\cos x + \\cos y + \\cos z \\\\\n2 \\sin(x+y+z) &amp;= \\sin x + \\sin y +\\sin z\n\\end{align*}\n\\begin{align*}\n2 e^{i(x+y+z)} &amp; = e^{ix} + e^{iy} + e^{iz} \n\\end{align*}\nMultiplying throughout by $e^{-ix}$, we get\n\\begin{align*}\n2e^{i(y+z)} = 1 + e^{i(y-x)} + e^{i(z-x)} \n\\end{align*}\nEquating the real parts, we get\n\\begin{align*}\n2\\cos(y+z) &amp;= 1 + \\cos(y-x) +\\cos(z-x)\\\\\n2(\\cos y \\cos z - \\sin y \\sin z) &amp; = 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos x + \\sin z \\sin x\n\\end{align*}\nSimilarly, we get\n\\begin{align*}\n2(\\cos x \\cos z - \\sin x \\sin z) &amp;= 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos y + \\sin z \\sin y\\\\\n2(\\cos x \\cos y - \\sin x \\sin y) &amp;= 1 + \\cos x \\cos z + \\sin x \\sin z+ \\cos z \\cos y + \\sin z \\sin y\\\\\n\\end{align*}\nAdding, and canceling $2 \\sum \\cos x \\cos y$ from the sides, we get\n$$\\sum \\sin x \\sin y = -\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114762, "score": 3, "text": "<p>I used the Blue's beautiful idea.</p>\n\n<p>Let $e^{ix}=a$, $e^{iy}=b$ and $e^{iz}=c$.</p>\n\n<p>Hence, $\\sin{x}=\\frac{a-\\frac{1}{a}}{2i}=\\frac{a^2-1}{2ai}$, $\\cos{x}=\\frac{a^2+1}{2a}$, $\\sin{y}=\\frac{b^2-1}{2bi}$, $\\sin{x}=\\frac{c^2-1}{2ci}$ and $\\cos{x}=\\frac{c^2+1}{2c}$.</p>\n\n<p>Thus, $\\sum\\limits_{cyc}\\sin{x}=2\\sin(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc-ab)=2(a^2b^2c^2-1)$ and</p>\n\n<p>$\\sum\\limits_{cyc}\\cos{x}=2\\cos(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc+ab)=2(a^2b^2c^2+1)$ or\n$ab+ac+bc=2$ and $a+b+c=2abc$.</p>\n\n<p>Thus, $$-\\sum\\limits_{cyc}\\sin{x}\\sin{y}=\\sum_{cyc}\\frac{(a^2-1)(b^2-1)}{4ab}=\\frac{\\sum\\limits_{cyc}c(a^2-1)(b^2-1)}{4abc}=$$\n$$=\\frac{abc(ab+ac+bc)-\\sum\\limits_{cyc}(a^2b+a^2c)+a+b+c}{4abc}=\\frac{abc(ab+ac+bc)-(a+b+c)(ab+ac+bc)+3abc+a+b+c}{4abc}=\\frac{1}{2}-1+\\frac{3}{4}+\\frac{1}{2}=\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114758, "score": 3, "text": 
"<p>Since\n\\begin{align*}\n2 \\cos(x+y+z) &amp;= \\cos x + \\cos y + \\cos z \\\\\n2 \\sin(x+y+z) &amp;= \\sin x + \\sin y +\\sin z\n\\end{align*}\n\\begin{align*}\n2 e^{i(x+y+z)} &amp; = e^{ix} + e^{iy} + e^{iz} \n\\end{align*}\nMultiplying throughout by $e^{-ix}$, we get\n\\begin{align*}\n2e^{i(y+z)} = 1 + e^{i(y-x)} + e^{i(z-x)} \n\\end{align*}\nEquating the real parts, we get\n\\begin{align*}\n2\\cos(y+z) &amp;= 1 + \\cos(y-x) +\\cos(z-x)\\\\\n2(\\cos y \\cos z - \\sin y \\sin z) &amp; = 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos x + \\sin z \\sin x\n\\end{align*}\nSimilarly, we get\n\\begin{align*}\n2(\\cos x \\cos z - \\sin x \\sin z) &amp;= 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos y + \\sin z \\sin y\\\\\n2(\\cos x \\cos y - \\sin x \\sin y) &amp;= 1 + \\cos x \\cos z + \\sin x \\sin z+ \\cos z \\cos y + \\sin z \\sin y\\\\\n\\end{align*}\nAdding, and canceling $2 \\sum \\cos x \\cos y$ from the sides, we get\n$$\\sum \\sin x \\sin y = -\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114762, "score": 3, "text": "<p>I used the Blue's beautiful idea.</p>\n\n<p>Let $e^{ix}=a$, $e^{iy}=b$ and $e^{iz}=c$.</p>\n\n<p>Hence, $\\sin{x}=\\frac{a-\\frac{1}{a}}{2i}=\\frac{a^2-1}{2ai}$, $\\cos{x}=\\frac{a^2+1}{2a}$, $\\sin{y}=\\frac{b^2-1}{2bi}$, $\\sin{x}=\\frac{c^2-1}{2ci}$ and $\\cos{x}=\\frac{c^2+1}{2c}$.</p>\n\n<p>Thus, $\\sum\\limits_{cyc}\\sin{x}=2\\sin(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc-ab)=2(a^2b^2c^2-1)$ and</p>\n\n<p>$\\sum\\limits_{cyc}\\cos{x}=2\\cos(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc+ab)=2(a^2b^2c^2+1)$ or\n$ab+ac+bc=2$ and $a+b+c=2abc$.</p>\n\n<p>Thus, $$-\\sum\\limits_{cyc}\\sin{x}\\sin{y}=\\sum_{cyc}\\frac{(a^2-1)(b^2-1)}{4ab}=\\frac{\\sum\\limits_{cyc}c(a^2-1)(b^2-1)}{4abc}=$$\n$$=\\frac{abc(ab+ac+bc)-\\sum\\limits_{cyc}(a^2b+a^2c)+a+b+c}{4abc}=\\frac{abc(ab+ac+bc)-(a+b+c)(ab+ac+bc)+3abc+a+b+c}{4abc}=\\frac{1}{2}-1+\\frac{3}{4}+\\frac{1}{2}=\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114758, "score": 3, "text": 
"<p>Since\n\\begin{align*}\n2 \\cos(x+y+z) &amp;= \\cos x + \\cos y + \\cos z \\\\\n2 \\sin(x+y+z) &amp;= \\sin x + \\sin y +\\sin z\n\\end{align*}\n\\begin{align*}\n2 e^{i(x+y+z)} &amp; = e^{ix} + e^{iy} + e^{iz} \n\\end{align*}\nMultiplying throughout by $e^{-ix}$, we get\n\\begin{align*}\n2e^{i(y+z)} = 1 + e^{i(y-x)} + e^{i(z-x)} \n\\end{align*}\nEquating the real parts, we get\n\\begin{align*}\n2\\cos(y+z) &amp;= 1 + \\cos(y-x) +\\cos(z-x)\\\\\n2(\\cos y \\cos z - \\sin y \\sin z) &amp; = 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos x + \\sin z \\sin x\n\\end{align*}\nSimilarly, we get\n\\begin{align*}\n2(\\cos x \\cos z - \\sin x \\sin z) &amp;= 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos y + \\sin z \\sin y\\\\\n2(\\cos x \\cos y - \\sin x \\sin y) &amp;= 1 + \\cos x \\cos z + \\sin x \\sin z+ \\cos z \\cos y + \\sin z \\sin y\\\\\n\\end{align*}\nAdding, and canceling $2 \\sum \\cos x \\cos y$ from the sides, we get\n$$\\sum \\sin x \\sin y = -\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114762, "score": 3, "text": "<p>I used the Blue's beautiful idea.</p>\n\n<p>Let $e^{ix}=a$, $e^{iy}=b$ and $e^{iz}=c$.</p>\n\n<p>Hence, $\\sin{x}=\\frac{a-\\frac{1}{a}}{2i}=\\frac{a^2-1}{2ai}$, $\\cos{x}=\\frac{a^2+1}{2a}$, $\\sin{y}=\\frac{b^2-1}{2bi}$, $\\sin{x}=\\frac{c^2-1}{2ci}$ and $\\cos{x}=\\frac{c^2+1}{2c}$.</p>\n\n<p>Thus, $\\sum\\limits_{cyc}\\sin{x}=2\\sin(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc-ab)=2(a^2b^2c^2-1)$ and</p>\n\n<p>$\\sum\\limits_{cyc}\\cos{x}=2\\cos(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc+ab)=2(a^2b^2c^2+1)$ or\n$ab+ac+bc=2$ and $a+b+c=2abc$.</p>\n\n<p>Thus, $$-\\sum\\limits_{cyc}\\sin{x}\\sin{y}=\\sum_{cyc}\\frac{(a^2-1)(b^2-1)}{4ab}=\\frac{\\sum\\limits_{cyc}c(a^2-1)(b^2-1)}{4abc}=$$\n$$=\\frac{abc(ab+ac+bc)-\\sum\\limits_{cyc}(a^2b+a^2c)+a+b+c}{4abc}=\\frac{abc(ab+ac+bc)-(a+b+c)(ab+ac+bc)+3abc+a+b+c}{4abc}=\\frac{1}{2}-1+\\frac{3}{4}+\\frac{1}{2}=\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114758, "score": 3, "text": 
"<p>Since\n\\begin{align*}\n2 \\cos(x+y+z) &amp;= \\cos x + \\cos y + \\cos z \\\\\n2 \\sin(x+y+z) &amp;= \\sin x + \\sin y +\\sin z\n\\end{align*}\n\\begin{align*}\n2 e^{i(x+y+z)} &amp; = e^{ix} + e^{iy} + e^{iz} \n\\end{align*}\nMultiplying throughout by $e^{-ix}$, we get\n\\begin{align*}\n2e^{i(y+z)} = 1 + e^{i(y-x)} + e^{i(z-x)} \n\\end{align*}\nEquating the real parts, we get\n\\begin{align*}\n2\\cos(y+z) &amp;= 1 + \\cos(y-x) +\\cos(z-x)\\\\\n2(\\cos y \\cos z - \\sin y \\sin z) &amp; = 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos x + \\sin z \\sin x\n\\end{align*}\nSimilarly, we get\n\\begin{align*}\n2(\\cos x \\cos z - \\sin x \\sin z) &amp;= 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos y + \\sin z \\sin y\\\\\n2(\\cos x \\cos y - \\sin x \\sin y) &amp;= 1 + \\cos x \\cos z + \\sin x \\sin z+ \\cos z \\cos y + \\sin z \\sin y\\\\\n\\end{align*}\nAdding, and canceling $2 \\sum \\cos x \\cos y$ from the sides, we get\n$$\\sum \\sin x \\sin y = -\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114762, "score": 3, "text": "<p>I used the Blue's beautiful idea.</p>\n\n<p>Let $e^{ix}=a$, $e^{iy}=b$ and $e^{iz}=c$.</p>\n\n<p>Hence, $\\sin{x}=\\frac{a-\\frac{1}{a}}{2i}=\\frac{a^2-1}{2ai}$, $\\cos{x}=\\frac{a^2+1}{2a}$, $\\sin{y}=\\frac{b^2-1}{2bi}$, $\\sin{x}=\\frac{c^2-1}{2ci}$ and $\\cos{x}=\\frac{c^2+1}{2c}$.</p>\n\n<p>Thus, $\\sum\\limits_{cyc}\\sin{x}=2\\sin(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc-ab)=2(a^2b^2c^2-1)$ and</p>\n\n<p>$\\sum\\limits_{cyc}\\cos{x}=2\\cos(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc+ab)=2(a^2b^2c^2+1)$ or\n$ab+ac+bc=2$ and $a+b+c=2abc$.</p>\n\n<p>Thus, $$-\\sum\\limits_{cyc}\\sin{x}\\sin{y}=\\sum_{cyc}\\frac{(a^2-1)(b^2-1)}{4ab}=\\frac{\\sum\\limits_{cyc}c(a^2-1)(b^2-1)}{4abc}=$$\n$$=\\frac{abc(ab+ac+bc)-\\sum\\limits_{cyc}(a^2b+a^2c)+a+b+c}{4abc}=\\frac{abc(ab+ac+bc)-(a+b+c)(ab+ac+bc)+3abc+a+b+c}{4abc}=\\frac{1}{2}-1+\\frac{3}{4}+\\frac{1}{2}=\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114758, "score": 3, "text": 
"<p>Since\n\\begin{align*}\n2 \\cos(x+y+z) &amp;= \\cos x + \\cos y + \\cos z \\\\\n2 \\sin(x+y+z) &amp;= \\sin x + \\sin y +\\sin z\n\\end{align*}\n\\begin{align*}\n2 e^{i(x+y+z)} &amp; = e^{ix} + e^{iy} + e^{iz} \n\\end{align*}\nMultiplying throughout by $e^{-ix}$, we get\n\\begin{align*}\n2e^{i(y+z)} = 1 + e^{i(y-x)} + e^{i(z-x)} \n\\end{align*}\nEquating the real parts, we get\n\\begin{align*}\n2\\cos(y+z) &amp;= 1 + \\cos(y-x) +\\cos(z-x)\\\\\n2(\\cos y \\cos z - \\sin y \\sin z) &amp; = 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos x + \\sin z \\sin x\n\\end{align*}\nSimilarly, we get\n\\begin{align*}\n2(\\cos x \\cos z - \\sin x \\sin z) &amp;= 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos y + \\sin z \\sin y\\\\\n2(\\cos x \\cos y - \\sin x \\sin y) &amp;= 1 + \\cos x \\cos z + \\sin x \\sin z+ \\cos z \\cos y + \\sin z \\sin y\\\\\n\\end{align*}\nAdding, and canceling $2 \\sum \\cos x \\cos y$ from the sides, we get\n$$\\sum \\sin x \\sin y = -\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114762, "score": 3, "text": "<p>I used the Blue's beautiful idea.</p>\n\n<p>Let $e^{ix}=a$, $e^{iy}=b$ and $e^{iz}=c$.</p>\n\n<p>Hence, $\\sin{x}=\\frac{a-\\frac{1}{a}}{2i}=\\frac{a^2-1}{2ai}$, $\\cos{x}=\\frac{a^2+1}{2a}$, $\\sin{y}=\\frac{b^2-1}{2bi}$, $\\sin{x}=\\frac{c^2-1}{2ci}$ and $\\cos{x}=\\frac{c^2+1}{2c}$.</p>\n\n<p>Thus, $\\sum\\limits_{cyc}\\sin{x}=2\\sin(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc-ab)=2(a^2b^2c^2-1)$ and</p>\n\n<p>$\\sum\\limits_{cyc}\\cos{x}=2\\cos(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc+ab)=2(a^2b^2c^2+1)$ or\n$ab+ac+bc=2$ and $a+b+c=2abc$.</p>\n\n<p>Thus, $$-\\sum\\limits_{cyc}\\sin{x}\\sin{y}=\\sum_{cyc}\\frac{(a^2-1)(b^2-1)}{4ab}=\\frac{\\sum\\limits_{cyc}c(a^2-1)(b^2-1)}{4abc}=$$\n$$=\\frac{abc(ab+ac+bc)-\\sum\\limits_{cyc}(a^2b+a^2c)+a+b+c}{4abc}=\\frac{abc(ab+ac+bc)-(a+b+c)(ab+ac+bc)+3abc+a+b+c}{4abc}=\\frac{1}{2}-1+\\frac{3}{4}+\\frac{1}{2}=\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114758, "score": 3, "text": 
"<p>Since\n\\begin{align*}\n2 \\cos(x+y+z) &amp;= \\cos x + \\cos y + \\cos z \\\\\n2 \\sin(x+y+z) &amp;= \\sin x + \\sin y +\\sin z\n\\end{align*}\n\\begin{align*}\n2 e^{i(x+y+z)} &amp; = e^{ix} + e^{iy} + e^{iz} \n\\end{align*}\nMultiplying throughout by $e^{-ix}$, we get\n\\begin{align*}\n2e^{i(y+z)} = 1 + e^{i(y-x)} + e^{i(z-x)} \n\\end{align*}\nEquating the real parts, we get\n\\begin{align*}\n2\\cos(y+z) &amp;= 1 + \\cos(y-x) +\\cos(z-x)\\\\\n2(\\cos y \\cos z - \\sin y \\sin z) &amp; = 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos x + \\sin z \\sin x\n\\end{align*}\nSimilarly, we get\n\\begin{align*}\n2(\\cos x \\cos z - \\sin x \\sin z) &amp;= 1 + \\cos x \\cos y + \\sin x \\sin y + \\cos z \\cos y + \\sin z \\sin y\\\\\n2(\\cos x \\cos y - \\sin x \\sin y) &amp;= 1 + \\cos x \\cos z + \\sin x \\sin z+ \\cos z \\cos y + \\sin z \\sin y\\\\\n\\end{align*}\nAdding, and canceling $2 \\sum \\cos x \\cos y$ from the sides, we get\n$$\\sum \\sin x \\sin y = -\\frac{3}{4}$$</p>\n" }, { "answer_id": 2114762, "score": 3, "text": "<p>I used the Blue's beautiful idea.</p>\n\n<p>Let $e^{ix}=a$, $e^{iy}=b$ and $e^{iz}=c$.</p>\n\n<p>Hence, $\\sin{x}=\\frac{a-\\frac{1}{a}}{2i}=\\frac{a^2-1}{2ai}$, $\\cos{x}=\\frac{a^2+1}{2a}$, $\\sin{y}=\\frac{b^2-1}{2bi}$, $\\sin{x}=\\frac{c^2-1}{2ci}$ and $\\cos{x}=\\frac{c^2+1}{2c}$.</p>\n\n<p>Thus, $\\sum\\limits_{cyc}\\sin{x}=2\\sin(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc-ab)=2(a^2b^2c^2-1)$ and</p>\n\n<p>$\\sum\\limits_{cyc}\\cos{x}=2\\cos(x+y+z)$ gives $\\sum\\limits_{cyc}(a^2bc+ab)=2(a^2b^2c^2+1)$ or\n$ab+ac+bc=2$ and $a+b+c=2abc$.</p>\n\n<p>Thus, $$-\\sum\\limits_{cyc}\\sin{x}\\sin{y}=\\sum_{cyc}\\frac{(a^2-1)(b^2-1)}{4ab}=\\frac{\\sum\\limits_{cyc}c(a^2-1)(b^2-1)}{4abc}=$$\n$$=\\frac{abc(ab+ac+bc)-\\sum\\limits_{cyc}(a^2b+a^2c)+a+b+c}{4abc}=\\frac{abc(ab+ac+bc)-(a+b+c)(ab+ac+bc)+3abc+a+b+c}{4abc}=\\frac{1}{2}-1+\\frac{3}{4}+\\frac{1}{2}=\\frac{3}{4}$$</p>\n" } ]
Question: How can I find $\sum_{cyc}\sin x\sin y$ Answer 1: <p>Since \begin{align*} 2 \cos(x+y+z) &amp;= \cos x + \cos y + \cos z \\ 2 \sin(x+y+z) &amp;= \sin x + \sin y +\sin z \end{align*} \begin{align*} 2 e^{i(x+y+z)} &amp; = e^{ix} + e^{iy} + e^{iz} \end{align*} Multiplying throughout by $e^{-ix}$, we get \begin{align*} 2e^{i(y+z)} = 1 + e^{i(y-x)} + e^{i(z-x)} \end{align*} Equating the real parts, we get \begin{align*} 2\cos(y+z) &amp;= 1 + \cos(y-x) +\cos(z-x)\\ 2(\cos y \cos z - \sin y \sin z) &amp; = 1 + \cos x \cos y + \sin x \sin y + \cos z \cos x + \sin z \sin x \end{align*} Similarly, we get \begin{align*} 2(\cos x \cos z - \sin x \sin z) &amp;= 1 + \cos x \cos y + \sin x \sin y + \cos z \cos y + \sin z \sin y\\ 2(\cos x \cos y - \sin x \sin y) &amp;= 1 + \cos x \cos z + \sin x \sin z+ \cos z \cos y + \sin z \sin y\\ \end{align*} Adding, and canceling $2 \sum \cos x \cos y$ from the sides, we get $$\sum \sin x \sin y = -\frac{3}{4}$$</p> Answer 2: <p>I used the Blue's beautiful idea.</p> <p>Let $e^{ix}=a$, $e^{iy}=b$ and $e^{iz}=c$.</p> <p>Hence, $\sin{x}=\frac{a-\frac{1}{a}}{2i}=\frac{a^2-1}{2ai}$, $\cos{x}=\frac{a^2+1}{2a}$, $\sin{y}=\frac{b^2-1}{2bi}$, $\sin{z}=\frac{c^2-1}{2ci}$ and $\cos{z}=\frac{c^2+1}{2c}$.</p> <p>Thus, $\sum\limits_{cyc}\sin{x}=2\sin(x+y+z)$ gives $\sum\limits_{cyc}(a^2bc-ab)=2(a^2b^2c^2-1)$ and</p> <p>$\sum\limits_{cyc}\cos{x}=2\cos(x+y+z)$ gives $\sum\limits_{cyc}(a^2bc+ab)=2(a^2b^2c^2+1)$ or $ab+ac+bc=2$ and $a+b+c=2abc$.</p> <p>Thus, $$-\sum\limits_{cyc}\sin{x}\sin{y}=\sum_{cyc}\frac{(a^2-1)(b^2-1)}{4ab}=\frac{\sum\limits_{cyc}c(a^2-1)(b^2-1)}{4abc}=$$ $$=\frac{abc(ab+ac+bc)-\sum\limits_{cyc}(a^2b+a^2c)+a+b+c}{4abc}=\frac{abc(ab+ac+bc)-(a+b+c)(ab+ac+bc)+3abc+a+b+c}{4abc}=\frac{1}{2}-1+\frac{3}{4}+\frac{1}{2}=\frac{3}{4}$$</p>
\cos z \\ 2 \sin(x+y+z) &amp;= \sin x + \sin y +\sin z \end{align*} \begin{align*} 2 e^{i(x+y+z)} &amp; = e^{ix} + e^{iy} + e^{iz} \end{align*} Multiplying throughout by $e^{-ix}$, we get \begin{align*} 2e^{i(y+z)} = 1 + e^{i(y-x)} + e^{i(z-x)} \end{align*} Equating the real parts, we get \begin{align*} 2\cos(y+z) &amp;= 1 + \cos(y-x) +\cos(z-x)\\ 2(\cos y \cos z - \sin y \sin z) &amp; = 1 + \cos x \cos y + \sin x \sin y + \cos z \cos x + \sin z \sin x \end{align*} Similarly, we get \begin{align*} 2(\cos x \cos z - \sin x \sin z) &amp;= 1 + \cos x \cos y + \sin x \sin y + \cos z \cos y + \sin z \sin y\\ 2(\cos x \cos y - \sin x \sin y) &amp;= 1 + \cos x \cos z + \sin x \sin z+ \cos z \cos y + \sin z \sin y\\ \end{align*} Adding, and canceling $2 \sum \cos x \cos y$ from the sides, we get $$\sum \sin x \sin y = -\frac{3}{4}$$</p> Answer 20: <p>I used the Blue's beautiful idea.</p> <p>Let $e^{ix}=a$, $e^{iy}=b$ and $e^{iz}=c$.</p> <p>Hence, $\sin{x}=\frac{a-\frac{1}{a}}{2i}=\frac{a^2-1}{2ai}$, $\cos{x}=\frac{a^2+1}{2a}$, $\sin{y}=\frac{b^2-1}{2bi}$, $\sin{x}=\frac{c^2-1}{2ci}$ and $\cos{x}=\frac{c^2+1}{2c}$.</p> <p>Thus, $\sum\limits_{cyc}\sin{x}=2\sin(x+y+z)$ gives $\sum\limits_{cyc}(a^2bc-ab)=2(a^2b^2c^2-1)$ and</p> <p>$\sum\limits_{cyc}\cos{x}=2\cos(x+y+z)$ gives $\sum\limits_{cyc}(a^2bc+ab)=2(a^2b^2c^2+1)$ or $ab+ac+bc=2$ and $a+b+c=2abc$.</p> <p>Thus, $$-\sum\limits_{cyc}\sin{x}\sin{y}=\sum_{cyc}\frac{(a^2-1)(b^2-1)}{4ab}=\frac{\sum\limits_{cyc}c(a^2-1)(b^2-1)}{4abc}=$$ $$=\frac{abc(ab+ac+bc)-\sum\limits_{cyc}(a^2b+a^2c)+a+b+c}{4abc}=\frac{abc(ab+ac+bc)-(a+b+c)(ab+ac+bc)+3abc+a+b+c}{4abc}=\frac{1}{2}-1+\frac{3}{4}+\frac{1}{2}=\frac{3}{4}$$</p>
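The cyclic-sum result derived above can be sanity-checked numerically. The sketch below (illustrative code, not part of the original answers) constructs angles $x, y, z$ satisfying $2e^{i(x+y+z)} = e^{ix}+e^{iy}+e^{iz}$, which encodes both given conditions (real part is the cosine condition, imaginary part the sine condition), by bisecting on $y$ for a fixed $x$, then evaluates $\sum\sin x\sin y$:

```python
import cmath
import math

# For e^{iz} = (e^{ix}+e^{iy}) / (2 e^{i(x+y)} - 1) to lie on the unit circle,
# the two moduli below must agree; bisect on y for fixed x until they do.
def residual(x, y):
    return abs(cmath.exp(1j * x) + cmath.exp(1j * y)) - abs(2 * cmath.exp(1j * (x + y)) - 1)

x = 1.0
lo, hi = -1.0, math.pi - 1.0   # residual is positive at lo and negative at hi
for _ in range(200):           # plain bisection; converges to machine precision
    mid = 0.5 * (lo + hi)
    if residual(x, lo) * residual(x, mid) <= 0:
        hi = mid
    else:
        lo = mid
y = 0.5 * (lo + hi)

# Recover z from e^{iz} = (e^{ix}+e^{iy}) / (2 e^{i(x+y)} - 1)
z = cmath.phase((cmath.exp(1j * x) + cmath.exp(1j * y)) / (2 * cmath.exp(1j * (x + y)) - 1))

# Residual of the defining identity 2 e^{i(x+y+z)} = e^{ix}+e^{iy}+e^{iz}
err = abs(2 * cmath.exp(1j * (x + y + z))
          - (cmath.exp(1j * x) + cmath.exp(1j * y) + cmath.exp(1j * z)))

# Cyclic sum; should come out as -3/4
s = math.sin(x) * math.sin(y) + math.sin(y) * math.sin(z) + math.sin(z) * math.sin(x)
print(err, s)
```

The choice $x = 1$ and the bracket $[-1, \pi-1]$ are arbitrary; any bracket where the residual changes sign produces a valid triple, and the cyclic sum is the same for all of them.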
On $Z = \max \left(X_1, X_2, \dots, X_N \right)$ where $X_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$
2,115,048
6
https://math.stackexchange.com/questions/2115048/on-z-max-leftx-1-x-2-dots-x-n-right-where-x-i-sim-mathcaln-mu
<p>How does the maximum of $N$ normal random variables behave? If they are i.i.d., then on the right tail ($x \rightarrow \infty$) it behaves as if it is distributed with Gumbel. But any results for the generic case? </p> <p>That is, how does $Z = \max \left(X_1, X_2, \dots, X_N \right)$ behave where $X_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$, or with shared variance $X_i \sim \mathcal{N}(\mu_i, \sigma^2)$. </p> <p>I assume the shared variance &amp; different mean case should behave similar to the i.i.d. case, but are there any known results?</p>
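As a quick numerical probe of this question (a sketch under the independence assumption, with arbitrary illustrative parameters): for independent $X_i$ the maximum has CDF $F_Z(z)=\prod_i F_i(z)$, so its density is $\sum_i f_i(z)\prod_{j\neq i}F_j(z)$. The code checks that expression against a numerical derivative of the product CDF and verifies that it integrates to one:

```python
import math

# Arbitrary illustrative parameters (not from the question)
mus = [4.0, 3.0, 2.0, 7.0]
sigmas = [0.1, 2.0, 3.0, 4.0]

def F(z, mu, s):   # normal CDF via the error function
    return 0.5 * (1.0 + math.erf((z - mu) / (math.sqrt(2.0) * s)))

def f(z, mu, s):   # normal pdf
    return math.exp(-((z - mu) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def cdf_max(z):    # product of the individual CDFs
    p = 1.0
    for mu, s in zip(mus, sigmas):
        p *= F(z, mu, s)
    return p

def pdf_max(z):    # sum_i f_i(z) * prod_{j != i} F_j(z)
    total = 0.0
    for i, (mu, s) in enumerate(zip(mus, sigmas)):
        term = f(z, mu, s)
        for j, (mu2, s2) in enumerate(zip(mus, sigmas)):
            if j != i:
                term *= F(z, mu2, s2)
        total += term
    return total

# Analytic density vs. central-difference derivative of the product CDF
h = 1e-6
max_gap = max(abs(pdf_max(z) - (cdf_max(z + h) - cdf_max(z - h)) / (2 * h))
              for z in [1.0, 3.5, 5.0, 8.0])

# Crude Riemann-sum check that the density integrates to ~1
zs = [-20.0 + k * 0.01 for k in range(6001)]
area = sum(pdf_max(z) for z in zs) * 0.01
print(max_gap, area)
```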
[ { "answer_id": 2125577, "score": 4, "text": "<p><a href=\"https://people.emich.edu/aross15/q/papers/bounds_Emax.pdf\" rel=\"nofollow noreferrer\">Ross 2003</a> gives an upper bound for the more general case, where $X_i$ can be dependent:</p>\n\n<p>$$\\mathbb{E} \\left[ \\max_i X_i \\right] \\le c + \\sum_{i=1}^N \\left(\\sigma^2_i f_i(c) + (\\mu_i - c) Pr\\left\\{X_i &gt; c \\right\\} \\right)$$\nfor all values of $c$, where $f_i(c)$ is the pdf of $X_i$.</p>\n\n<p>The tightest bound can be obtained by using a $c$ that satisfies</p>\n\n<p>$$\\sum_{i=1}^N Pr\\left\\{X_i &gt; c \\right\\} = 1 \\: . $$</p>\n" }, { "answer_id": 2116615, "score": 2, "text": "<p>This problem is an example of order statistics with non-identical distributions; in this instance, due to having non-identical parameters. We are given a $N(\\mu, \\sigma^2)$ parent where identicality is relaxed by replacing $(\\mu, \\sigma)$ with $(\\mu_i, \\sigma_i)$ for $i = 1, ..., n$, with parent pdf say $f_i(x)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/iXHoE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iXHoE.png\" alt=\"enter image description here\"></a></p>\n\n<p>and cdf $F_i(x)$.</p>\n\n<p>The general form for the pdf of $Z = \\max \\left(X_1, X_2, \\dots, X_n \\right)$ will be:</p>\n\n<p>$$\\text{pdf}(Z) \\quad = \\quad \\sum _{i=1}^n \\big\\{ \\; f_i(z) \\prod _{j=1}^n F_j(z){}_{\\text{for } j\\neq i} \\big \\}$$</p>\n\n<p>This is manageable. To illustrate and check, consider a specific case of say $n = 4$. 
Then the pdf of the maximum is the pdf of the $4^{\\text{th}}$ order statistic in a sample of size 4, which can be evaluated automatically using the <code>OrderStatNonIdentical</code> function in the <em>mathStatica</em> package for <em>Mathematica</em>:</p>\n\n<p><a href=\"https://i.stack.imgur.com/UCO3R.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/UCO3R.png\" alt=\"enter image description here\"></a></p>\n\n<p>where the Normal cdf $F_i(x) = \\frac{1}{2} \\left(1+ \\text{Erf}\\left(\\frac{x-\\mu _i}{\\sqrt{2} \\sigma _i}\\right)\\right)$, and <code>Erf</code> denotes the error function.</p>\n\n<p>This can yield unusual functional forms, such as bimodal pdfs, by choosing suitably designed parameters. For example, here is a plot of the pdf when $n= 4$, and $\\mu _1\\to 4,\\mu _2\\to 3,\\mu _3\\to 2,\\mu _4\\to 7$ and $\\sigma _1\\to 0.1,\\sigma _2\\to 2,\\sigma _3\\to 3,\\sigma _4\\to 4$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/HUq8L.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HUq8L.png\" alt=\"enter image description here\"></a></p>\n\n<p>More usual would just be a skewed right pdf. </p>\n\n<p><strong>Notes</strong>\n1. As disclosure, I should add that I am one of the authors of the function used above.</p>\n" }, { "answer_id": 2115216, "score": 1, "text": "<p>If we start with the CDF, \n\\begin{align*}\nF_Z(z) &amp;= P(Z \\leq z) \\\\\n&amp;= P(\\max(X_1,X_2,...,X_N) \\leq z) \\\\\n&amp;= P(X_1 \\leq z, X_2 \\leq z,...,X_N \\leq z) \\\\\n&amp;\\stackrel{(ind)}{=} \\prod_{1}^N P(X_i \\leq z) \\\\\n&amp;= \\prod_1^N F_{X_i}(z) \n\\end{align*}\nTaking the derivative yields the pdf. 
It doesn't seem to me like this will lead to a nice general expression.</p>\n" }, { "answer_id": 2125577, "score": 4, "text": "<p><a href=\"https://people.emich.edu/aross15/q/papers/bounds_Emax.pdf\" rel=\"nofollow noreferrer\">Ross 2003</a> gives an upper bound for the more general case, where $X_i$ can be dependent:</p>\n\n<p>$$\\mathbb{E} \\left[ \\max_i X_i \\right] \\le c + \\sum_{i=1}^N \\left(\\sigma^2_i + f_i(c) + (\\mu_i - c) Pr\\left\\{X_i &gt; c \\right\\} \\right)$$\nfor all values of $c$, where $f_i(c)$ is the pdf of $X_i$.</p>\n\n<p>The tightest bound can be obtained by using a $c$ that satisfies</p>\n\n<p>$$\\sum_{i=1}^N Pr\\left\\{X_i &gt; c \\right\\} = 1 \\: . $$</p>\n" }, { "answer_id": 2116615, "score": 2, "text": "<p>This problem is an example of order statistics with non-identical distributions; in this instance, due to having non-identical parameters. We are given a $N(\\mu, \\sigma^2)$ parent where identicality is relaxed by replacing $(\\mu, \\sigma)$ with $(\\mu_i, \\sigma_i)$ for $i = 1, ..., n$, with parent pdf say $f_i(x)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/iXHoE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iXHoE.png\" alt=\"enter image description here\"></a></p>\n\n<p>and cdf $F_i(x)$.</p>\n\n<p>The general form for the pdf of $Z = \\max \\left(X_1, X_2, \\dots, X_n \\right)$ will be:</p>\n\n<p>$$\\text{pdf}(Z) \\quad = \\quad \\sum _{i=1}^n \\big\\{ \\; f_i(z) \\prod _{j=1}^n F_j(z){}_{\\text{for } j\\neq i} \\big \\}$$</p>\n\n<p>This is manageable. To illustrate and check, consider a specific case of say $n = 4$. 
Then the pdf of the maximum is the pdf of the $4^{\\text{th}}$ order statistic in a sample of size 4, which can be evaluated automatically using the <code>OrderStatNonIdentical</code> function in the <em>mathStatica</em> package for <em>Mathematica</em>:</p>\n\n<p><a href=\"https://i.stack.imgur.com/UCO3R.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/UCO3R.png\" alt=\"enter image description here\"></a></p>\n\n<p>where the Normal cdf $F_i(x) = \\frac{1}{2} \\left(1+ \\text{Erf}\\left(\\frac{x-\\mu _i}{\\sqrt{2} \\sigma _i}\\right)\\right)$, and <code>Erf</code> denotes the error function.</p>\n\n<p>This can yield unusual functional forms, such as bimodal pdfs, by choosing suitably designed parameters. For example, here is a plot of the pdf when $n= 4$, and $\\mu _1\\to 4,\\mu _2\\to 3,\\mu _3\\to 2,\\mu _4\\to 7$ and $\\sigma _1\\to 0.1,\\sigma _2\\to 2,\\sigma _3\\to 3,\\sigma _4\\to 4$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/HUq8L.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HUq8L.png\" alt=\"enter image description here\"></a></p>\n\n<p>More usual would just be a skewed right pdf. </p>\n\n<p><strong>Notes</strong>\n1. As disclosure, I should add that I am one of the authors of the function used above.</p>\n" }, { "answer_id": 2115216, "score": 1, "text": "<p>If we start with the CDF, \n\\begin{align*}\nF_Z(z) &amp;= P(Z \\leq z) \\\\\n&amp;= P(\\max(X_1,X_2,...,X_N) \\leq z) \\\\\n&amp;= P(X_1 \\leq z, X_2 \\leq z,...,X_N \\leq z) \\\\\n&amp;\\stackrel{(ind)}{=} \\prod_{1}^N P(X_i \\leq z) \\\\\n&amp;= \\prod_1^N F_{X_i}(z) \n\\end{align*}\nTaking the derivative yields the pdf. 
It doesn't seem to me like this will lead to a nice general expression.</p>\n" }, { "answer_id": 2125577, "score": 4, "text": "<p><a href=\"https://people.emich.edu/aross15/q/papers/bounds_Emax.pdf\" rel=\"nofollow noreferrer\">Ross 2003</a> gives an upper bound for the more general case, where $X_i$ can be dependent:</p>\n\n<p>$$\\mathbb{E} \\left[ \\max_i X_i \\right] \\le c + \\sum_{i=1}^N \\left(\\sigma^2_i + f_i(c) + (\\mu_i - c) Pr\\left\\{X_i &gt; c \\right\\} \\right)$$\nfor all values of $c$, where $f_i(c)$ is the pdf of $X_i$.</p>\n\n<p>The tightest bound can be obtained by using a $c$ that satisfies</p>\n\n<p>$$\\sum_{i=1}^N Pr\\left\\{X_i &gt; c \\right\\} = 1 \\: . $$</p>\n" }, { "answer_id": 2116615, "score": 2, "text": "<p>This problem is an example of order statistics with non-identical distributions; in this instance, due to having non-identical parameters. We are given a $N(\\mu, \\sigma^2)$ parent where identicality is relaxed by replacing $(\\mu, \\sigma)$ with $(\\mu_i, \\sigma_i)$ for $i = 1, ..., n$, with parent pdf say $f_i(x)$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/iXHoE.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iXHoE.png\" alt=\"enter image description here\"></a></p>\n\n<p>and cdf $F_i(x)$.</p>\n\n<p>The general form for the pdf of $Z = \\max \\left(X_1, X_2, \\dots, X_n \\right)$ will be:</p>\n\n<p>$$\\text{pdf}(Z) \\quad = \\quad \\sum _{i=1}^n \\big\\{ \\; f_i(z) \\prod _{j=1}^n F_j(z){}_{\\text{for } j\\neq i} \\big \\}$$</p>\n\n<p>This is manageable. To illustrate and check, consider a specific case of say $n = 4$. 
Then the pdf of the maximum is the pdf of the $4^{\\text{th}}$ order statistic in a sample of size 4, which can be evaluated automatically using the <code>OrderStatNonIdentical</code> function in the <em>mathStatica</em> package for <em>Mathematica</em>:</p>\n\n<p><a href=\"https://i.stack.imgur.com/UCO3R.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/UCO3R.png\" alt=\"enter image description here\"></a></p>\n\n<p>where the Normal cdf $F_i(x) = \\frac{1}{2} \\left(1+ \\text{Erf}\\left(\\frac{x-\\mu _i}{\\sqrt{2} \\sigma _i}\\right)\\right)$, and <code>Erf</code> denotes the error function.</p>\n\n<p>This can yield unusual functional forms, such as bimodal pdfs, by choosing suitably designed parameters. For example, here is a plot of the pdf when $n= 4$, and $\\mu _1\\to 4,\\mu _2\\to 3,\\mu _3\\to 2,\\mu _4\\to 7$ and $\\sigma _1\\to 0.1,\\sigma _2\\to 2,\\sigma _3\\to 3,\\sigma _4\\to 4$:</p>\n\n<p><a href=\"https://i.stack.imgur.com/HUq8L.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HUq8L.png\" alt=\"enter image description here\"></a></p>\n\n<p>More usual would just be a skewed right pdf. </p>\n\n<p><strong>Notes</strong>\n1. As disclosure, I should add that I am one of the authors of the function used above.</p>\n" }, { "answer_id": 2115216, "score": 1, "text": "<p>If we start with the CDF, \n\\begin{align*}\nF_Z(z) &amp;= P(Z \\leq z) \\\\\n&amp;= P(\\max(X_1,X_2,...,X_N) \\leq z) \\\\\n&amp;= P(X_1 \\leq z, X_2 \\leq z,...,X_N \\leq z) \\\\\n&amp;\\stackrel{(ind)}{=} \\prod_{1}^N P(X_i \\leq z) \\\\\n&amp;= \\prod_1^N F_{X_i}(z) \n\\end{align*}\nTaking the derivative yields the pdf. It doesn't seem to me like this will lead to a nice general expression.</p>\n" } ]
Question: On $Z = \max \left(X_1, X_2, \dots, X_N \right)$ where $X_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$

Answer 1: <p><a href="https://people.emich.edu/aross15/q/papers/bounds_Emax.pdf" rel="nofollow noreferrer">Ross 2003</a> gives an upper bound for the more general case, where the $X_i$ may be dependent:</p>

<p>$$\mathbb{E} \left[ \max_i X_i \right] \le c + \sum_{i=1}^N \left(\sigma^2_i f_i(c) + (\mu_i - c) Pr\left\{X_i &gt; c \right\} \right)$$
for all values of $c$, where $f_i$ is the pdf of $X_i$.</p>

<p>The tightest bound is obtained by choosing a $c$ that satisfies</p>

<p>$$\sum_{i=1}^N Pr\left\{X_i &gt; c \right\} = 1 \: . $$</p>

Answer 2: <p>This problem is an example of order statistics with non-identical distributions; in this instance due to non-identical parameters. We are given a $N(\mu, \sigma^2)$ parent where identicality is relaxed by replacing $(\mu, \sigma)$ with $(\mu_i, \sigma_i)$ for $i = 1, \dots, n$, with parent pdf say $f_i(x)$:</p>

<p><a href="https://i.stack.imgur.com/iXHoE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iXHoE.png" alt="enter image description here"></a></p>

<p>and cdf $F_i(x)$.</p>

<p>The general form for the pdf of $Z = \max \left(X_1, X_2, \dots, X_n \right)$ will be:</p>

<p>$$\text{pdf}(Z) \quad = \quad \sum _{i=1}^n \big\{ \; f_i(z) \prod _{j \neq i} F_j(z) \big \}$$</p>

<p>This is manageable. To illustrate and check, consider a specific case of say $n = 4$. 
Then the pdf of the maximum is the pdf of the $4^{\text{th}}$ order statistic in a sample of size 4, which can be evaluated automatically using the <code>OrderStatNonIdentical</code> function in the <em>mathStatica</em> package for <em>Mathematica</em>:</p>

<p><a href="https://i.stack.imgur.com/UCO3R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UCO3R.png" alt="enter image description here"></a></p>

<p>where the Normal cdf is $F_i(x) = \frac{1}{2} \left(1+ \text{Erf}\left(\frac{x-\mu _i}{\sqrt{2} \sigma _i}\right)\right)$, and <code>Erf</code> denotes the error function.</p>

<p>This can yield unusual functional forms, such as bimodal pdfs, for suitably chosen parameters. For example, here is a plot of the pdf when $n= 4$, with $\mu _1\to 4,\mu _2\to 3,\mu _3\to 2,\mu _4\to 7$ and $\sigma _1\to 0.1,\sigma _2\to 2,\sigma _3\to 3,\sigma _4\to 4$:</p>

<p><a href="https://i.stack.imgur.com/HUq8L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HUq8L.png" alt="enter image description here"></a></p>

<p>More usually, the result is simply a right-skewed pdf.</p>

<p><strong>Notes</strong><br>
1. As disclosure, I should add that I am one of the authors of the function used above.</p>

Answer 3: <p>If we start with the CDF,
\begin{align*}
F_Z(z) &amp;= P(Z \leq z) \\
&amp;= P(\max(X_1,X_2,\dots,X_N) \leq z) \\
&amp;= P(X_1 \leq z, X_2 \leq z,\dots,X_N \leq z) \\
&amp;\stackrel{(ind)}{=} \prod_{i=1}^N P(X_i \leq z) \\
&amp;= \prod_{i=1}^N F_{X_i}(z)
\end{align*}
Taking the derivative yields the pdf. 
It doesn't seem to me like this will lead to a nice general expression.</p>
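The product-of-CDFs result and the order-statistics pdf formula above can be cross-checked numerically. Below is a stdlib-only Python sketch (function names are mine; the parameter values are the ones from the bimodal example): it builds $F_Z(z)=\prod_i F_i(z)$ and the pdf $\sum_i f_i(z)\prod_{j\neq i}F_j(z)$ directly from those formulas, and confirms that the pdf agrees with a numerical derivative of the CDF.

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def norm_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Parameters from the bimodal example above.
mus    = [4.0, 3.0, 2.0, 7.0]
sigmas = [0.1, 2.0, 3.0, 4.0]

def cdf_max(z):
    # F_Z(z) = prod_i F_i(z), by independence
    p = 1.0
    for mu, s in zip(mus, sigmas):
        p *= norm_cdf(z, mu, s)
    return p

def pdf_max(z):
    # pdf(Z) = sum_i f_i(z) * prod_{j != i} F_j(z)
    total = 0.0
    for i, (mu, s) in enumerate(zip(mus, sigmas)):
        term = norm_pdf(z, mu, s)
        for j, (mu2, s2) in enumerate(zip(mus, sigmas)):
            if j != i:
                term *= norm_cdf(z, mu2, s2)
        total += term
    return total

# The pdf should match the numerical derivative of the CDF.
z, h = 5.0, 1e-6
deriv = (cdf_max(z + h) - cdf_max(z - h)) / (2 * h)
print(abs(deriv - pdf_max(z)))  # tiny
```

Agreement of the two quantities to many decimal places checks both derivations at once.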
Then the pdf of the maximum is the pdf of the $4^{\text{th}}$ order statistic in a sample of size 4, which can be evaluated automatically using the <code>OrderStatNonIdentical</code> function in the <em>mathStatica</em> package for <em>Mathematica</em>:</p> <p><a href="https://i.stack.imgur.com/UCO3R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UCO3R.png" alt="enter image description here"></a></p> <p>where the Normal cdf $F_i(x) = \frac{1}{2} \left(1+ \text{Erf}\left(\frac{x-\mu _i}{\sqrt{2} \sigma _i}\right)\right)$, and <code>Erf</code> denotes the error function.</p> <p>This can yield unusual functional forms, such as bimodal pdfs, by choosing suitably designed parameters. For example, here is a plot of the pdf when $n= 4$, and $\mu _1\to 4,\mu _2\to 3,\mu _3\to 2,\mu _4\to 7$ and $\sigma _1\to 0.1,\sigma _2\to 2,\sigma _3\to 3,\sigma _4\to 4$:</p> <p><a href="https://i.stack.imgur.com/HUq8L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HUq8L.png" alt="enter image description here"></a></p> <p>More usual would just be a skewed right pdf. </p> <p><strong>Notes</strong> 1. As disclosure, I should add that I am one of the authors of the function used above.</p> Answer 6: <p>If we start with the CDF, \begin{align*} F_Z(z) &amp;= P(Z \leq z) \\ &amp;= P(\max(X_1,X_2,...,X_N) \leq z) \\ &amp;= P(X_1 \leq z, X_2 \leq z,...,X_N \leq z) \\ &amp;\stackrel{(ind)}{=} \prod_{1}^N P(X_i \leq z) \\ &amp;= \prod_1^N F_{X_i}(z) \end{align*} Taking the derivative yields the pdf. 
It doesn't seem to me like this will lead to a nice general expression.</p> Answer 7: <p><a href="https://people.emich.edu/aross15/q/papers/bounds_Emax.pdf" rel="nofollow noreferrer">Ross 2003</a> gives an upper bound for the more general case, where $X_i$ can be dependent:</p> <p>$$\mathbb{E} \left[ \max_i X_i \right] \le c + \sum_{i=1}^N \left(\sigma^2_i + f_i(c) + (\mu_i - c) Pr\left\{X_i &gt; c \right\} \right)$$ for all values of $c$, where $f_i(c)$ is the pdf of $X_i$.</p> <p>The tightest bound can be obtained by using a $c$ that satisfies</p> <p>$$\sum_{i=1}^N Pr\left\{X_i &gt; c \right\} = 1 \: . $$</p> Answer 8: <p>This problem is an example of order statistics with non-identical distributions; in this instance, due to having non-identical parameters. We are given a $N(\mu, \sigma^2)$ parent where identicality is relaxed by replacing $(\mu, \sigma)$ with $(\mu_i, \sigma_i)$ for $i = 1, ..., n$, with parent pdf say $f_i(x)$:</p> <p><a href="https://i.stack.imgur.com/iXHoE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iXHoE.png" alt="enter image description here"></a></p> <p>and cdf $F_i(x)$.</p> <p>The general form for the pdf of $Z = \max \left(X_1, X_2, \dots, X_n \right)$ will be:</p> <p>$$\text{pdf}(Z) \quad = \quad \sum _{i=1}^n \big\{ \; f_i(z) \prod _{j=1}^n F_j(z){}_{\text{for } j\neq i} \big \}$$</p> <p>This is manageable. To illustrate and check, consider a specific case of say $n = 4$. 
Then the pdf of the maximum is the pdf of the $4^{\text{th}}$ order statistic in a sample of size 4, which can be evaluated automatically using the <code>OrderStatNonIdentical</code> function in the <em>mathStatica</em> package for <em>Mathematica</em>:</p> <p><a href="https://i.stack.imgur.com/UCO3R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UCO3R.png" alt="enter image description here"></a></p> <p>where the Normal cdf $F_i(x) = \frac{1}{2} \left(1+ \text{Erf}\left(\frac{x-\mu _i}{\sqrt{2} \sigma _i}\right)\right)$, and <code>Erf</code> denotes the error function.</p> <p>This can yield unusual functional forms, such as bimodal pdfs, by choosing suitably designed parameters. For example, here is a plot of the pdf when $n= 4$, and $\mu _1\to 4,\mu _2\to 3,\mu _3\to 2,\mu _4\to 7$ and $\sigma _1\to 0.1,\sigma _2\to 2,\sigma _3\to 3,\sigma _4\to 4$:</p> <p><a href="https://i.stack.imgur.com/HUq8L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HUq8L.png" alt="enter image description here"></a></p> <p>More usual would just be a skewed right pdf. </p> <p><strong>Notes</strong> 1. As disclosure, I should add that I am one of the authors of the function used above.</p> Answer 9: <p>If we start with the CDF, \begin{align*} F_Z(z) &amp;= P(Z \leq z) \\ &amp;= P(\max(X_1,X_2,...,X_N) \leq z) \\ &amp;= P(X_1 \leq z, X_2 \leq z,...,X_N \leq z) \\ &amp;\stackrel{(ind)}{=} \prod_{1}^N P(X_i \leq z) \\ &amp;= \prod_1^N F_{X_i}(z) \end{align*} Taking the derivative yields the pdf. 
It doesn't seem to me like this will lead to a nice general expression.</p> Answer 10: <p><a href="https://people.emich.edu/aross15/q/papers/bounds_Emax.pdf" rel="nofollow noreferrer">Ross 2003</a> gives an upper bound for the more general case, where $X_i$ can be dependent:</p> <p>$$\mathbb{E} \left[ \max_i X_i \right] \le c + \sum_{i=1}^N \left(\sigma^2_i + f_i(c) + (\mu_i - c) Pr\left\{X_i &gt; c \right\} \right)$$ for all values of $c$, where $f_i(c)$ is the pdf of $X_i$.</p> <p>The tightest bound can be obtained by using a $c$ that satisfies</p> <p>$$\sum_{i=1}^N Pr\left\{X_i &gt; c \right\} = 1 \: . $$</p> Answer 11: <p>This problem is an example of order statistics with non-identical distributions; in this instance, due to having non-identical parameters. We are given a $N(\mu, \sigma^2)$ parent where identicality is relaxed by replacing $(\mu, \sigma)$ with $(\mu_i, \sigma_i)$ for $i = 1, ..., n$, with parent pdf say $f_i(x)$:</p> <p><a href="https://i.stack.imgur.com/iXHoE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iXHoE.png" alt="enter image description here"></a></p> <p>and cdf $F_i(x)$.</p> <p>The general form for the pdf of $Z = \max \left(X_1, X_2, \dots, X_n \right)$ will be:</p> <p>$$\text{pdf}(Z) \quad = \quad \sum _{i=1}^n \big\{ \; f_i(z) \prod _{j=1}^n F_j(z){}_{\text{for } j\neq i} \big \}$$</p> <p>This is manageable. To illustrate and check, consider a specific case of say $n = 4$. 
Then the pdf of the maximum is the pdf of the $4^{\text{th}}$ order statistic in a sample of size 4, which can be evaluated automatically using the <code>OrderStatNonIdentical</code> function in the <em>mathStatica</em> package for <em>Mathematica</em>:</p> <p><a href="https://i.stack.imgur.com/UCO3R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UCO3R.png" alt="enter image description here"></a></p> <p>where the Normal cdf $F_i(x) = \frac{1}{2} \left(1+ \text{Erf}\left(\frac{x-\mu _i}{\sqrt{2} \sigma _i}\right)\right)$, and <code>Erf</code> denotes the error function.</p> <p>This can yield unusual functional forms, such as bimodal pdfs, by choosing suitably designed parameters. For example, here is a plot of the pdf when $n= 4$, and $\mu _1\to 4,\mu _2\to 3,\mu _3\to 2,\mu _4\to 7$ and $\sigma _1\to 0.1,\sigma _2\to 2,\sigma _3\to 3,\sigma _4\to 4$:</p> <p><a href="https://i.stack.imgur.com/HUq8L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HUq8L.png" alt="enter image description here"></a></p> <p>More usual would just be a skewed right pdf. </p> <p><strong>Notes</strong> 1. As disclosure, I should add that I am one of the authors of the function used above.</p> Answer 12: <p>If we start with the CDF, \begin{align*} F_Z(z) &amp;= P(Z \leq z) \\ &amp;= P(\max(X_1,X_2,...,X_N) \leq z) \\ &amp;= P(X_1 \leq z, X_2 \leq z,...,X_N \leq z) \\ &amp;\stackrel{(ind)}{=} \prod_{1}^N P(X_i \leq z) \\ &amp;= \prod_1^N F_{X_i}(z) \end{align*} Taking the derivative yields the pdf. 
It doesn't seem to me like this will lead to a nice general expression.</p> Answer 13: <p><a href="https://people.emich.edu/aross15/q/papers/bounds_Emax.pdf" rel="nofollow noreferrer">Ross 2003</a> gives an upper bound for the more general case, where $X_i$ can be dependent:</p> <p>$$\mathbb{E} \left[ \max_i X_i \right] \le c + \sum_{i=1}^N \left(\sigma^2_i + f_i(c) + (\mu_i - c) Pr\left\{X_i &gt; c \right\} \right)$$ for all values of $c$, where $f_i(c)$ is the pdf of $X_i$.</p> <p>The tightest bound can be obtained by using a $c$ that satisfies</p> <p>$$\sum_{i=1}^N Pr\left\{X_i &gt; c \right\} = 1 \: . $$</p> Answer 14: <p>This problem is an example of order statistics with non-identical distributions; in this instance, due to having non-identical parameters. We are given a $N(\mu, \sigma^2)$ parent where identicality is relaxed by replacing $(\mu, \sigma)$ with $(\mu_i, \sigma_i)$ for $i = 1, ..., n$, with parent pdf say $f_i(x)$:</p> <p><a href="https://i.stack.imgur.com/iXHoE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iXHoE.png" alt="enter image description here"></a></p> <p>and cdf $F_i(x)$.</p> <p>The general form for the pdf of $Z = \max \left(X_1, X_2, \dots, X_n \right)$ will be:</p> <p>$$\text{pdf}(Z) \quad = \quad \sum _{i=1}^n \big\{ \; f_i(z) \prod _{j=1}^n F_j(z){}_{\text{for } j\neq i} \big \}$$</p> <p>This is manageable. To illustrate and check, consider a specific case of say $n = 4$. 
Then the pdf of the maximum is the pdf of the $4^{\text{th}}$ order statistic in a sample of size 4, which can be evaluated automatically using the <code>OrderStatNonIdentical</code> function in the <em>mathStatica</em> package for <em>Mathematica</em>:</p> <p><a href="https://i.stack.imgur.com/UCO3R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UCO3R.png" alt="enter image description here"></a></p> <p>where the Normal cdf $F_i(x) = \frac{1}{2} \left(1+ \text{Erf}\left(\frac{x-\mu _i}{\sqrt{2} \sigma _i}\right)\right)$, and <code>Erf</code> denotes the error function.</p> <p>This can yield unusual functional forms, such as bimodal pdfs, by choosing suitably designed parameters. For example, here is a plot of the pdf when $n= 4$, and $\mu _1\to 4,\mu _2\to 3,\mu _3\to 2,\mu _4\to 7$ and $\sigma _1\to 0.1,\sigma _2\to 2,\sigma _3\to 3,\sigma _4\to 4$:</p> <p><a href="https://i.stack.imgur.com/HUq8L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HUq8L.png" alt="enter image description here"></a></p> <p>More usual would just be a skewed right pdf. </p> <p><strong>Notes</strong> 1. As disclosure, I should add that I am one of the authors of the function used above.</p> Answer 15: <p>If we start with the CDF, \begin{align*} F_Z(z) &amp;= P(Z \leq z) \\ &amp;= P(\max(X_1,X_2,...,X_N) \leq z) \\ &amp;= P(X_1 \leq z, X_2 \leq z,...,X_N \leq z) \\ &amp;\stackrel{(ind)}{=} \prod_{1}^N P(X_i \leq z) \\ &amp;= \prod_1^N F_{X_i}(z) \end{align*} Taking the derivative yields the pdf. 
It doesn't seem to me like this will lead to a nice general expression.</p> Answer 16: <p><a href="https://people.emich.edu/aross15/q/papers/bounds_Emax.pdf" rel="nofollow noreferrer">Ross 2003</a> gives an upper bound for the more general case, where $X_i$ can be dependent:</p> <p>$$\mathbb{E} \left[ \max_i X_i \right] \le c + \sum_{i=1}^N \left(\sigma^2_i + f_i(c) + (\mu_i - c) Pr\left\{X_i &gt; c \right\} \right)$$ for all values of $c$, where $f_i(c)$ is the pdf of $X_i$.</p> <p>The tightest bound can be obtained by using a $c$ that satisfies</p> <p>$$\sum_{i=1}^N Pr\left\{X_i &gt; c \right\} = 1 \: . $$</p> Answer 17: <p>This problem is an example of order statistics with non-identical distributions; in this instance, due to having non-identical parameters. We are given a $N(\mu, \sigma^2)$ parent where identicality is relaxed by replacing $(\mu, \sigma)$ with $(\mu_i, \sigma_i)$ for $i = 1, ..., n$, with parent pdf say $f_i(x)$:</p> <p><a href="https://i.stack.imgur.com/iXHoE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iXHoE.png" alt="enter image description here"></a></p> <p>and cdf $F_i(x)$.</p> <p>The general form for the pdf of $Z = \max \left(X_1, X_2, \dots, X_n \right)$ will be:</p> <p>$$\text{pdf}(Z) \quad = \quad \sum _{i=1}^n \big\{ \; f_i(z) \prod _{j=1}^n F_j(z){}_{\text{for } j\neq i} \big \}$$</p> <p>This is manageable. To illustrate and check, consider a specific case of say $n = 4$. 
Then the pdf of the maximum is the pdf of the $4^{\text{th}}$ order statistic in a sample of size 4, which can be evaluated automatically using the <code>OrderStatNonIdentical</code> function in the <em>mathStatica</em> package for <em>Mathematica</em>:</p> <p><a href="https://i.stack.imgur.com/UCO3R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UCO3R.png" alt="enter image description here"></a></p> <p>where the Normal cdf $F_i(x) = \frac{1}{2} \left(1+ \text{Erf}\left(\frac{x-\mu _i}{\sqrt{2} \sigma _i}\right)\right)$, and <code>Erf</code> denotes the error function.</p> <p>This can yield unusual functional forms, such as bimodal pdfs, by choosing suitably designed parameters. For example, here is a plot of the pdf when $n= 4$, and $\mu _1\to 4,\mu _2\to 3,\mu _3\to 2,\mu _4\to 7$ and $\sigma _1\to 0.1,\sigma _2\to 2,\sigma _3\to 3,\sigma _4\to 4$:</p> <p><a href="https://i.stack.imgur.com/HUq8L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HUq8L.png" alt="enter image description here"></a></p> <p>More usual would just be a skewed right pdf. </p> <p><strong>Notes</strong> 1. As disclosure, I should add that I am one of the authors of the function used above.</p> Answer 18: <p>If we start with the CDF, \begin{align*} F_Z(z) &amp;= P(Z \leq z) \\ &amp;= P(\max(X_1,X_2,...,X_N) \leq z) \\ &amp;= P(X_1 \leq z, X_2 \leq z,...,X_N \leq z) \\ &amp;\stackrel{(ind)}{=} \prod_{1}^N P(X_i \leq z) \\ &amp;= \prod_1^N F_{X_i}(z) \end{align*} Taking the derivative yields the pdf. 
It doesn't seem to me like this will lead to a nice general expression.</p> Answer 19: <p><a href="https://people.emich.edu/aross15/q/papers/bounds_Emax.pdf" rel="nofollow noreferrer">Ross 2003</a> gives an upper bound for the more general case, where $X_i$ can be dependent:</p> <p>$$\mathbb{E} \left[ \max_i X_i \right] \le c + \sum_{i=1}^N \left(\sigma^2_i + f_i(c) + (\mu_i - c) Pr\left\{X_i &gt; c \right\} \right)$$ for all values of $c$, where $f_i(c)$ is the pdf of $X_i$.</p> <p>The tightest bound can be obtained by using a $c$ that satisfies</p> <p>$$\sum_{i=1}^N Pr\left\{X_i &gt; c \right\} = 1 \: . $$</p> Answer 20: <p>This problem is an example of order statistics with non-identical distributions; in this instance, due to having non-identical parameters. We are given a $N(\mu, \sigma^2)$ parent where identicality is relaxed by replacing $(\mu, \sigma)$ with $(\mu_i, \sigma_i)$ for $i = 1, ..., n$, with parent pdf say $f_i(x)$:</p> <p><a href="https://i.stack.imgur.com/iXHoE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iXHoE.png" alt="enter image description here"></a></p> <p>and cdf $F_i(x)$.</p> <p>The general form for the pdf of $Z = \max \left(X_1, X_2, \dots, X_n \right)$ will be:</p> <p>$$\text{pdf}(Z) \quad = \quad \sum _{i=1}^n \big\{ \; f_i(z) \prod _{j=1}^n F_j(z){}_{\text{for } j\neq i} \big \}$$</p> <p>This is manageable. To illustrate and check, consider a specific case of say $n = 4$. 
Then the pdf of the maximum is the pdf of the $4^{\text{th}}$ order statistic in a sample of size 4, which can be evaluated automatically using the <code>OrderStatNonIdentical</code> function in the <em>mathStatica</em> package for <em>Mathematica</em>:</p> <p><a href="https://i.stack.imgur.com/UCO3R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UCO3R.png" alt="enter image description here"></a></p> <p>where the Normal cdf $F_i(x) = \frac{1}{2} \left(1+ \text{Erf}\left(\frac{x-\mu _i}{\sqrt{2} \sigma _i}\right)\right)$, and <code>Erf</code> denotes the error function.</p> <p>This can yield unusual functional forms, such as bimodal pdfs, by choosing suitably designed parameters. For example, here is a plot of the pdf when $n= 4$, and $\mu _1\to 4,\mu _2\to 3,\mu _3\to 2,\mu _4\to 7$ and $\sigma _1\to 0.1,\sigma _2\to 2,\sigma _3\to 3,\sigma _4\to 4$:</p> <p><a href="https://i.stack.imgur.com/HUq8L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HUq8L.png" alt="enter image description here"></a></p> <p>More usual would just be a skewed right pdf. </p> <p><strong>Notes</strong> 1. As disclosure, I should add that I am one of the authors of the function used above.</p> Answer 21: <p>If we start with the CDF, \begin{align*} F_Z(z) &amp;= P(Z \leq z) \\ &amp;= P(\max(X_1,X_2,...,X_N) \leq z) \\ &amp;= P(X_1 \leq z, X_2 \leq z,...,X_N \leq z) \\ &amp;\stackrel{(ind)}{=} \prod_{1}^N P(X_i \leq z) \\ &amp;= \prod_1^N F_{X_i}(z) \end{align*} Taking the derivative yields the pdf. 
It doesn't seem to me like this will lead to a nice general expression.</p> Answer 22: <p><a href="https://people.emich.edu/aross15/q/papers/bounds_Emax.pdf" rel="nofollow noreferrer">Ross 2003</a> gives an upper bound for the more general case, where $X_i$ can be dependent:</p> <p>$$\mathbb{E} \left[ \max_i X_i \right] \le c + \sum_{i=1}^N \left(\sigma^2_i + f_i(c) + (\mu_i - c) Pr\left\{X_i &gt; c \right\} \right)$$ for all values of $c$, where $f_i(c)$ is the pdf of $X_i$.</p> <p>The tightest bound can be obtained by using a $c$ that satisfies</p> <p>$$\sum_{i=1}^N Pr\left\{X_i &gt; c \right\} = 1 \: . $$</p> Answer 23: <p>This problem is an example of order statistics with non-identical distributions; in this instance, due to having non-identical parameters. We are given a $N(\mu, \sigma^2)$ parent where identicality is relaxed by replacing $(\mu, \sigma)$ with $(\mu_i, \sigma_i)$ for $i = 1, ..., n$, with parent pdf say $f_i(x)$:</p> <p><a href="https://i.stack.imgur.com/iXHoE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iXHoE.png" alt="enter image description here"></a></p> <p>and cdf $F_i(x)$.</p> <p>The general form for the pdf of $Z = \max \left(X_1, X_2, \dots, X_n \right)$ will be:</p> <p>$$\text{pdf}(Z) \quad = \quad \sum _{i=1}^n \big\{ \; f_i(z) \prod _{j=1}^n F_j(z){}_{\text{for } j\neq i} \big \}$$</p> <p>This is manageable. To illustrate and check, consider a specific case of say $n = 4$. 
Then the pdf of the maximum is the pdf of the $4^{\text{th}}$ order statistic in a sample of size 4, which can be evaluated automatically using the <code>OrderStatNonIdentical</code> function in the <em>mathStatica</em> package for <em>Mathematica</em>:</p> <p><a href="https://i.stack.imgur.com/UCO3R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UCO3R.png" alt="enter image description here"></a></p> <p>where the Normal cdf $F_i(x) = \frac{1}{2} \left(1+ \text{Erf}\left(\frac{x-\mu _i}{\sqrt{2} \sigma _i}\right)\right)$, and <code>Erf</code> denotes the error function.</p> <p>This can yield unusual functional forms, such as bimodal pdfs, by choosing suitably designed parameters. For example, here is a plot of the pdf when $n= 4$, and $\mu _1\to 4,\mu _2\to 3,\mu _3\to 2,\mu _4\to 7$ and $\sigma _1\to 0.1,\sigma _2\to 2,\sigma _3\to 3,\sigma _4\to 4$:</p> <p><a href="https://i.stack.imgur.com/HUq8L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HUq8L.png" alt="enter image description here"></a></p> <p>More usual would just be a skewed right pdf. </p> <p><strong>Notes</strong> 1. As disclosure, I should add that I am one of the authors of the function used above.</p> Answer 24: <p>If we start with the CDF, \begin{align*} F_Z(z) &amp;= P(Z \leq z) \\ &amp;= P(\max(X_1,X_2,...,X_N) \leq z) \\ &amp;= P(X_1 \leq z, X_2 \leq z,...,X_N \leq z) \\ &amp;\stackrel{(ind)}{=} \prod_{1}^N P(X_i \leq z) \\ &amp;= \prod_1^N F_{X_i}(z) \end{align*} Taking the derivative yields the pdf. 
It doesn't seem to me like this will lead to a nice general expression.</p> Answer 25: <p><a href="https://people.emich.edu/aross15/q/papers/bounds_Emax.pdf" rel="nofollow noreferrer">Ross 2003</a> gives an upper bound for the more general case, where the $X_i$ can be dependent:</p> <p>$$\mathbb{E} \left[ \max_i X_i \right] \le c + \sum_{i=1}^N \left(\sigma^2_i f_i(c) + (\mu_i - c) Pr\left\{X_i &gt; c \right\} \right)$$ for all values of $c$, where $f_i(c)$ is the pdf of $X_i$ evaluated at $c$.</p> <p>The tightest bound is obtained by choosing a $c$ that satisfies</p> <p>$$\sum_{i=1}^N Pr\left\{X_i &gt; c \right\} = 1 \: . $$</p>
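The product-of-CDFs derivation and the order-statistic pdf formula above are easy to check numerically. Below is a sketch in Python with NumPy/SciPy (not the <em>mathStatica</em> code shown in the answer); the integration grid bounds are ad hoc choices that comfortably cover the example parameters $\mu = (4,3,2,7)$, $\sigma = (0.1,2,3,4)$ used in the bimodal illustration:

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

def max_normal_pdf(z, mus, sigmas):
    """pdf of Z = max(X_1, ..., X_n) for independent X_i ~ N(mu_i, sigma_i^2):
    pdf(z) = sum_i f_i(z) * prod_{j != i} F_j(z)."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    mus = np.asarray(mus, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    f = stats.norm.pdf(z[:, None], mus, sigmas)  # f[k, i] = f_i(z_k)
    F = stats.norm.cdf(z[:, None], mus, sigmas)  # F[k, i] = F_i(z_k)
    out = np.zeros(len(z))
    for i in range(len(mus)):
        others = np.arange(len(mus)) != i
        out += f[:, i] * F[:, others].prod(axis=1)
    return out

def expected_max(mus, sigmas, lo=-50.0, hi=80.0, m=40001):
    # E[Z] by integrating z * pdf(z) on a wide grid; lo/hi/m are
    # ad hoc settings, wide and fine enough for the example parameters
    zs = np.linspace(lo, hi, m)
    return trapezoid(zs * max_normal_pdf(zs, mus, sigmas), zs)
```

With the parameters from the bimodal example the pdf should integrate to 1 and the resulting mean should agree with a Monte Carlo estimate of $\mathbb{E}\left[\max_i X_i\right]$.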
The dual group of the profinite integers
2,115,224
6
https://math.stackexchange.com/questions/2115224/the-dual-group-of-the-profinite-integers
<p>I am trying to figure out the following easy example of Pontryagin duality (see <a href="https://math.stackexchange.com/questions/126584/pontrjagin-duality-for-profinite-and-torsion-abelian-groups">Pontrjagin duality for profinite and torsion abelian groups</a>).</p> <p>For an abelian group $A$ (torsion or profinite), let its <strong>dual</strong> $A^D$ be the group of homomorphisms $\operatorname{Hom}_\mathbb{Z} (A, \mathbb{Q}/\mathbb{Z})$.</p> <p>For example, the dual of $\mathbb{Q}/\mathbb{Z}$ is the profinite integers $\hat{\mathbb{Z}}$: $$\operatorname{Hom}_\mathbb{Z} (\mathbb{Q}/\mathbb{Z}, \mathbb{Q}/\mathbb{Z}) \cong \operatorname{Hom}_\mathbb{Z} (\varinjlim_m \mathbb{Z}/m\mathbb{Z}, \mathbb{Q}/\mathbb{Z}) \cong \varprojlim_m \operatorname{Hom}_\mathbb{Z} (\mathbb{Z}/m\mathbb{Z}, \mathbb{Q}/\mathbb{Z}) \cong \varprojlim_m \mathbb{Z}/m\mathbb{Z} \cong \hat{\mathbb{Z}},$$ as the $m$-torsion part of $\mathbb{Q}/\mathbb{Z}$ is given by $\left[\frac{i}{m}\right]$ for $i = 0, 1, \ldots, m-1$. So we have $$(\mathbb{Q}/\mathbb{Z})^D \cong \hat{\mathbb{Z}}.$$</p> <p>Then we should have $\operatorname{Hom}_\mathbb{Z} (\hat{\mathbb{Z}}, \mathbb{Q}/\mathbb{Z}) \cong \mathbb{Q}/\mathbb{Z}$, but what is the right and easy proof? We just saw that $\hat{\mathbb{Z}}^D \cong ((\mathbb{Q}/\mathbb{Z})^D)^D$, but how do I see directly that $((\mathbb{Q}/\mathbb{Z})^D)^D \cong \mathbb{Q}/\mathbb{Z}$ without invoking Pontryagin duality?</p>
[ { "answer_id": 2115515, "score": 3, "text": "<p>It is not true that $\\operatorname{Hom}_\\mathbb{Z} (\\hat{\\mathbb{Z}}, \\mathbb{Q}/\\mathbb{Z}) \\cong \\mathbb{Q}/\\mathbb{Z}$. Indeed, $\\hat{\\mathbb{Z}}$ is an uncountable torsion-free abelian group, so tensoring with $\\mathbb{Q}$ gives an uncountable-dimensional $\\mathbb{Q}$-vector space $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$. You can then use this to construct uncountably many different homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ (for instance, pick a basis for $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$ consisting of elements of $\\hat{\\mathbb{Z}}$ and consider the maps to $\\mathbb{Q}$ given by each basis element composed with the map $\\mathbb{Q}\\stackrel{1/2}{\\to}\\mathbb{Q}/\\mathbb{Z}$; for each basis element this gives a homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ sending that basis element to $1/2$ but every other basis element to $0$).</p>\n\n<p>What is true is that the group of <em>continuous</em> homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is isomorphic to $\\mathbb{Q}/\\mathbb{Z}$, where you give $\\hat{\\mathbb{Z}}$ the profinite topology and $\\mathbb{Q}/\\mathbb{Z}$ the quotient topology from $\\mathbb{Q}$ (or the discrete topology, if you prefer; it turns out to make no difference in this case). To prove this, note that since $\\hat{\\mathbb{Z}}$ is compact, the image of any homomorphism $f:\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is compact. But the only compact subgroups of $\\mathbb{Q}/\\mathbb{Z}$ are finite subgroups, so this means the image of $f$ is finite. There is then some $n$ such that $nf(x)=0$ for all $x$, so $f$ factors through the quotient $\\hat{\\mathbb{Z}}\\to\\hat{\\mathbb{Z}}/n\\hat{\\mathbb{Z}}\\cong\\mathbb{Z}/n\\mathbb{Z}$.</p>\n\n<p>Thus every continuous homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ factors through one of the finite quotients $\\mathbb{Z}/n\\mathbb{Z}$ of $\\hat{\\mathbb{Z}}$. 
This means the functor $\\operatorname{Hom}_c (-, \\mathbb{Q}/\\mathbb{Z})$ (consisting of only continuous homomorphisms) actually <em>does</em> preserve the limit in question. So the group of continuous homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is the colimit of the dual groups $(\\mathbb{Z}/n\\mathbb{Z})^D\\cong\\mathbb{Z}/n\\mathbb{Z}$, and this colimit is $\\mathbb{Q}/\\mathbb{Z}$.</p>\n" }, { "answer_id": 2115399, "score": 0, "text": "<p>That's basically all there is to the \"right and easy\" proof for this.</p>\n\n<blockquote>\n <p>$$\\hom\\left(\\Bbb Q/\\Bbb Z,S^1\\right)=\\hom\\left(\\varinjlim_n \\Bbb Z/n, S^1\\right).$$</p>\n</blockquote>\n\n<p>By definition of categorical duals you get this to be</p>\n\n<p>$$\\varprojlim_n\\hom\\left(\\Bbb Z/n, S^1\\right)=\\varprojlim_n\\Bbb Z/n=\\widehat{\\Bbb Z}$$</p>\n\n<p>since finite groups are self-dual. You can bootstrap from this and the fact that the double dual is canonically isomorphic to the original group and you can show the Pontryjagin dual of $\\widehat{\\Bbb Z}$ is $\\Bbb Q/\\Bbb Z$. I always find this particularly pleasing since torsion free dualizes to connected and $\\Bbb Q/\\Bbb Z$ is a completely torsion group just as $\\widehat{\\Bbb Z}$ is totally disconnected.</p>\n\n<p>This is basically what you wrote in fewer words, but it's definitely the \"right\" proof in some sense of the word: it appeals only to canonical categorical facts and is based on simple, straightforward classical facts from the basic theory.</p>\n" }, { "answer_id": 2115515, "score": 3, "text": "<p>It is not true that $\\operatorname{Hom}_\\mathbb{Z} (\\hat{\\mathbb{Z}}, \\mathbb{Q}/\\mathbb{Z}) \\cong \\mathbb{Q}/\\mathbb{Z}$. Indeed, $\\hat{\\mathbb{Z}}$ is an uncountable torsion-free abelian group, so tensoring with $\\mathbb{Q}$ gives an uncountable-dimensional $\\mathbb{Q}$-vector space $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$. 
You can then use this to construct uncountably many different homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ (for instance, pick a basis for $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$ consisting of elements of $\\hat{\\mathbb{Z}}$ and consider the maps to $\\mathbb{Q}$ given by each basis element composed with the map $\\mathbb{Q}\\stackrel{1/2}{\\to}\\mathbb{Q}/\\mathbb{Z}$; for each basis element this gives a homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ sending that basis element to $1/2$ but every other basis element to $0$).</p>\n\n<p>What is true is that the group of <em>continuous</em> homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is isomorphic to $\\mathbb{Q}/\\mathbb{Z}$, where you give $\\hat{\\mathbb{Z}}$ the profinite topology and $\\mathbb{Q}/\\mathbb{Z}$ the quotient topology from $\\mathbb{Q}$ (or the discrete topology, if you prefer; it turns out to make no difference in this case). To prove this, note that since $\\hat{\\mathbb{Z}}$ is compact, the image of any homomorphism $f:\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is compact. But the only compact subgroups of $\\mathbb{Q}/\\mathbb{Z}$ are finite subgroups, so this means the image of $f$ is finite. There is then some $n$ such that $nf(x)=0$ for all $x$, so $f$ factors through the quotient $\\hat{\\mathbb{Z}}\\to\\hat{\\mathbb{Z}}/n\\hat{\\mathbb{Z}}\\cong\\mathbb{Z}/n\\mathbb{Z}$.</p>\n\n<p>Thus every continuous homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ factors through one of the finite quotients $\\mathbb{Z}/n\\mathbb{Z}$ of $\\hat{\\mathbb{Z}}$. This means the functor $\\operatorname{Hom}_c (-, \\mathbb{Q}/\\mathbb{Z})$ (consisting of only continuous homomorphisms) actually <em>does</em> preserve the limit in question. 
So the group of continuous homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is the colimit of the dual groups $(\\mathbb{Z}/n\\mathbb{Z})^D\\cong\\mathbb{Z}/n\\mathbb{Z}$, and this colimit is $\\mathbb{Q}/\\mathbb{Z}$.</p>\n" }, { "answer_id": 2115399, "score": 0, "text": "<p>That's basically all there is to the \"right and easy\" proof for this.</p>\n\n<blockquote>\n <p>$$\\hom\\left(\\Bbb Q/\\Bbb Z,S^1\\right)=\\hom\\left(\\varinjlim_n \\Bbb Z/n, S^1\\right).$$</p>\n</blockquote>\n\n<p>By definition of categorical duals you get this to be</p>\n\n<p>$$\\varprojlim_n\\hom\\left(\\Bbb Z/n, S^1\\right)=\\varprojlim_n\\Bbb Z/n=\\widehat{\\Bbb Z}$$</p>\n\n<p>since finite groups are self-dual. You can bootstrap from this and the fact that the double dual is canonically isomorphic to the original group and you can show the Pontryjagin dual of $\\widehat{\\Bbb Z}$ is $\\Bbb Q/\\Bbb Z$. I always find this particularly pleasing since torsion free dualizes to connected and $\\Bbb Q/\\Bbb Z$ is a completely torsion group just as $\\widehat{\\Bbb Z}$ is totally disconnected.</p>\n\n<p>This is basically what you wrote in fewer words, but it's definitely the \"right\" proof in some sense of the word: it appeals only to canonical categorical facts and is based on simple, straightforward classical facts from the basic theory.</p>\n" }, { "answer_id": 2115515, "score": 3, "text": "<p>It is not true that $\\operatorname{Hom}_\\mathbb{Z} (\\hat{\\mathbb{Z}}, \\mathbb{Q}/\\mathbb{Z}) \\cong \\mathbb{Q}/\\mathbb{Z}$. Indeed, $\\hat{\\mathbb{Z}}$ is an uncountable torsion-free abelian group, so tensoring with $\\mathbb{Q}$ gives an uncountable-dimensional $\\mathbb{Q}$-vector space $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$. 
You can then use this to construct uncountably many different homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ (for instance, pick a basis for $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$ consisting of elements of $\\hat{\\mathbb{Z}}$ and consider the maps to $\\mathbb{Q}$ given by each basis element composed with the map $\\mathbb{Q}\\stackrel{1/2}{\\to}\\mathbb{Q}/\\mathbb{Z}$; for each basis element this gives a homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ sending that basis element to $1/2$ but every other basis element to $0$).</p>\n\n<p>What is true is that the group of <em>continuous</em> homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is isomorphic to $\\mathbb{Q}/\\mathbb{Z}$, where you give $\\hat{\\mathbb{Z}}$ the profinite topology and $\\mathbb{Q}/\\mathbb{Z}$ the quotient topology from $\\mathbb{Q}$ (or the discrete topology, if you prefer; it turns out to make no difference in this case). To prove this, note that since $\\hat{\\mathbb{Z}}$ is compact, the image of any homomorphism $f:\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is compact. But the only compact subgroups of $\\mathbb{Q}/\\mathbb{Z}$ are finite subgroups, so this means the image of $f$ is finite. There is then some $n$ such that $nf(x)=0$ for all $x$, so $f$ factors through the quotient $\\hat{\\mathbb{Z}}\\to\\hat{\\mathbb{Z}}/n\\hat{\\mathbb{Z}}\\cong\\mathbb{Z}/n\\mathbb{Z}$.</p>\n\n<p>Thus every continuous homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ factors through one of the finite quotients $\\mathbb{Z}/n\\mathbb{Z}$ of $\\hat{\\mathbb{Z}}$. This means the functor $\\operatorname{Hom}_c (-, \\mathbb{Q}/\\mathbb{Z})$ (consisting of only continuous homomorphisms) actually <em>does</em> preserve the limit in question. 
So the group of continuous homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is the colimit of the dual groups $(\\mathbb{Z}/n\\mathbb{Z})^D\\cong\\mathbb{Z}/n\\mathbb{Z}$, and this colimit is $\\mathbb{Q}/\\mathbb{Z}$.</p>\n" }, { "answer_id": 2115399, "score": 0, "text": "<p>That's basically all there is to the \"right and easy\" proof for this.</p>\n\n<blockquote>\n <p>$$\\hom\\left(\\Bbb Q/\\Bbb Z,S^1\\right)=\\hom\\left(\\varinjlim_n \\Bbb Z/n, S^1\\right).$$</p>\n</blockquote>\n\n<p>By definition of categorical duals you get this to be</p>\n\n<p>$$\\varprojlim_n\\hom\\left(\\Bbb Z/n, S^1\\right)=\\varprojlim_n\\Bbb Z/n=\\widehat{\\Bbb Z}$$</p>\n\n<p>since finite groups are self-dual. You can bootstrap from this and the fact that the double dual is canonically isomorphic to the original group and you can show the Pontryjagin dual of $\\widehat{\\Bbb Z}$ is $\\Bbb Q/\\Bbb Z$. I always find this particularly pleasing since torsion free dualizes to connected and $\\Bbb Q/\\Bbb Z$ is a completely torsion group just as $\\widehat{\\Bbb Z}$ is totally disconnected.</p>\n\n<p>This is basically what you wrote in fewer words, but it's definitely the \"right\" proof in some sense of the word: it appeals only to canonical categorical facts and is based on simple, straightforward classical facts from the basic theory.</p>\n" }, { "answer_id": 2115515, "score": 3, "text": "<p>It is not true that $\\operatorname{Hom}_\\mathbb{Z} (\\hat{\\mathbb{Z}}, \\mathbb{Q}/\\mathbb{Z}) \\cong \\mathbb{Q}/\\mathbb{Z}$. Indeed, $\\hat{\\mathbb{Z}}$ is an uncountable torsion-free abelian group, so tensoring with $\\mathbb{Q}$ gives an uncountable-dimensional $\\mathbb{Q}$-vector space $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$. 
You can then use this to construct uncountably many different homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ (for instance, pick a basis for $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$ consisting of elements of $\\hat{\\mathbb{Z}}$ and consider the maps to $\\mathbb{Q}$ given by each basis element composed with the map $\\mathbb{Q}\\stackrel{1/2}{\\to}\\mathbb{Q}/\\mathbb{Z}$; for each basis element this gives a homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ sending that basis element to $1/2$ but every other basis element to $0$).</p>\n\n<p>What is true is that the group of <em>continuous</em> homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is isomorphic to $\\mathbb{Q}/\\mathbb{Z}$, where you give $\\hat{\\mathbb{Z}}$ the profinite topology and $\\mathbb{Q}/\\mathbb{Z}$ the quotient topology from $\\mathbb{Q}$ (or the discrete topology, if you prefer; it turns out to make no difference in this case). To prove this, note that since $\\hat{\\mathbb{Z}}$ is compact, the image of any homomorphism $f:\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is compact. But the only compact subgroups of $\\mathbb{Q}/\\mathbb{Z}$ are finite subgroups, so this means the image of $f$ is finite. There is then some $n$ such that $nf(x)=0$ for all $x$, so $f$ factors through the quotient $\\hat{\\mathbb{Z}}\\to\\hat{\\mathbb{Z}}/n\\hat{\\mathbb{Z}}\\cong\\mathbb{Z}/n\\mathbb{Z}$.</p>\n\n<p>Thus every continuous homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ factors through one of the finite quotients $\\mathbb{Z}/n\\mathbb{Z}$ of $\\hat{\\mathbb{Z}}$. This means the functor $\\operatorname{Hom}_c (-, \\mathbb{Q}/\\mathbb{Z})$ (consisting of only continuous homomorphisms) actually <em>does</em> preserve the limit in question. 
So the group of continuous homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is the colimit of the dual groups $(\\mathbb{Z}/n\\mathbb{Z})^D\\cong\\mathbb{Z}/n\\mathbb{Z}$, and this colimit is $\\mathbb{Q}/\\mathbb{Z}$.</p>\n" }, { "answer_id": 2115399, "score": 0, "text": "<p>That's basically all there is to the \"right and easy\" proof for this.</p>\n\n<blockquote>\n <p>$$\\hom\\left(\\Bbb Q/\\Bbb Z,S^1\\right)=\\hom\\left(\\varinjlim_n \\Bbb Z/n, S^1\\right).$$</p>\n</blockquote>\n\n<p>By definition of categorical duals you get this to be</p>\n\n<p>$$\\varprojlim_n\\hom\\left(\\Bbb Z/n, S^1\\right)=\\varprojlim_n\\Bbb Z/n=\\widehat{\\Bbb Z}$$</p>\n\n<p>since finite groups are self-dual. You can bootstrap from this and the fact that the double dual is canonically isomorphic to the original group and you can show the Pontryjagin dual of $\\widehat{\\Bbb Z}$ is $\\Bbb Q/\\Bbb Z$. I always find this particularly pleasing since torsion free dualizes to connected and $\\Bbb Q/\\Bbb Z$ is a completely torsion group just as $\\widehat{\\Bbb Z}$ is totally disconnected.</p>\n\n<p>This is basically what you wrote in fewer words, but it's definitely the \"right\" proof in some sense of the word: it appeals only to canonical categorical facts and is based on simple, straightforward classical facts from the basic theory.</p>\n" }, { "answer_id": 2115515, "score": 3, "text": "<p>It is not true that $\\operatorname{Hom}_\\mathbb{Z} (\\hat{\\mathbb{Z}}, \\mathbb{Q}/\\mathbb{Z}) \\cong \\mathbb{Q}/\\mathbb{Z}$. Indeed, $\\hat{\\mathbb{Z}}$ is an uncountable torsion-free abelian group, so tensoring with $\\mathbb{Q}$ gives an uncountable-dimensional $\\mathbb{Q}$-vector space $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$. 
You can then use this to construct uncountably many different homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ (for instance, pick a basis for $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$ consisting of elements of $\\hat{\\mathbb{Z}}$ and consider the maps to $\\mathbb{Q}$ given by each basis element composed with the map $\\mathbb{Q}\\stackrel{1/2}{\\to}\\mathbb{Q}/\\mathbb{Z}$; for each basis element this gives a homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ sending that basis element to $1/2$ but every other basis element to $0$).</p>\n\n<p>What is true is that the group of <em>continuous</em> homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is isomorphic to $\\mathbb{Q}/\\mathbb{Z}$, where you give $\\hat{\\mathbb{Z}}$ the profinite topology and $\\mathbb{Q}/\\mathbb{Z}$ the quotient topology from $\\mathbb{Q}$ (or the discrete topology, if you prefer; it turns out to make no difference in this case). To prove this, note that since $\\hat{\\mathbb{Z}}$ is compact, the image of any homomorphism $f:\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is compact. But the only compact subgroups of $\\mathbb{Q}/\\mathbb{Z}$ are finite subgroups, so this means the image of $f$ is finite. There is then some $n$ such that $nf(x)=0$ for all $x$, so $f$ factors through the quotient $\\hat{\\mathbb{Z}}\\to\\hat{\\mathbb{Z}}/n\\hat{\\mathbb{Z}}\\cong\\mathbb{Z}/n\\mathbb{Z}$.</p>\n\n<p>Thus every continuous homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ factors through one of the finite quotients $\\mathbb{Z}/n\\mathbb{Z}$ of $\\hat{\\mathbb{Z}}$. This means the functor $\\operatorname{Hom}_c (-, \\mathbb{Q}/\\mathbb{Z})$ (consisting of only continuous homomorphisms) actually <em>does</em> preserve the limit in question. 
So the group of continuous homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is the colimit of the dual groups $(\\mathbb{Z}/n\\mathbb{Z})^D\\cong\\mathbb{Z}/n\\mathbb{Z}$, and this colimit is $\\mathbb{Q}/\\mathbb{Z}$.</p>\n" }, { "answer_id": 2115399, "score": 0, "text": "<p>That's basically all there is to the \"right and easy\" proof for this.</p>\n\n<blockquote>\n <p>$$\\hom\\left(\\Bbb Q/\\Bbb Z,S^1\\right)=\\hom\\left(\\varinjlim_n \\Bbb Z/n, S^1\\right).$$</p>\n</blockquote>\n\n<p>By definition of categorical duals you get this to be</p>\n\n<p>$$\\varprojlim_n\\hom\\left(\\Bbb Z/n, S^1\\right)=\\varprojlim_n\\Bbb Z/n=\\widehat{\\Bbb Z}$$</p>\n\n<p>since finite groups are self-dual. You can bootstrap from this and the fact that the double dual is canonically isomorphic to the original group and you can show the Pontryjagin dual of $\\widehat{\\Bbb Z}$ is $\\Bbb Q/\\Bbb Z$. I always find this particularly pleasing since torsion free dualizes to connected and $\\Bbb Q/\\Bbb Z$ is a completely torsion group just as $\\widehat{\\Bbb Z}$ is totally disconnected.</p>\n\n<p>This is basically what you wrote in fewer words, but it's definitely the \"right\" proof in some sense of the word: it appeals only to canonical categorical facts and is based on simple, straightforward classical facts from the basic theory.</p>\n" }, { "answer_id": 2115515, "score": 3, "text": "<p>It is not true that $\\operatorname{Hom}_\\mathbb{Z} (\\hat{\\mathbb{Z}}, \\mathbb{Q}/\\mathbb{Z}) \\cong \\mathbb{Q}/\\mathbb{Z}$. Indeed, $\\hat{\\mathbb{Z}}$ is an uncountable torsion-free abelian group, so tensoring with $\\mathbb{Q}$ gives an uncountable-dimensional $\\mathbb{Q}$-vector space $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$. 
You can then use this to construct uncountably many different homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ (for instance, pick a basis for $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$ consisting of elements of $\\hat{\\mathbb{Z}}$ and consider the maps to $\\mathbb{Q}$ given by each basis element composed with the map $\\mathbb{Q}\\stackrel{1/2}{\\to}\\mathbb{Q}/\\mathbb{Z}$; for each basis element this gives a homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ sending that basis element to $1/2$ but every other basis element to $0$).</p>\n\n<p>What is true is that the group of <em>continuous</em> homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is isomorphic to $\\mathbb{Q}/\\mathbb{Z}$, where you give $\\hat{\\mathbb{Z}}$ the profinite topology and $\\mathbb{Q}/\\mathbb{Z}$ the quotient topology from $\\mathbb{Q}$ (or the discrete topology, if you prefer; it turns out to make no difference in this case). To prove this, note that since $\\hat{\\mathbb{Z}}$ is compact, the image of any homomorphism $f:\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is compact. But the only compact subgroups of $\\mathbb{Q}/\\mathbb{Z}$ are finite subgroups, so this means the image of $f$ is finite. There is then some $n$ such that $nf(x)=0$ for all $x$, so $f$ factors through the quotient $\\hat{\\mathbb{Z}}\\to\\hat{\\mathbb{Z}}/n\\hat{\\mathbb{Z}}\\cong\\mathbb{Z}/n\\mathbb{Z}$.</p>\n\n<p>Thus every continuous homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ factors through one of the finite quotients $\\mathbb{Z}/n\\mathbb{Z}$ of $\\hat{\\mathbb{Z}}$. This means the functor $\\operatorname{Hom}_c (-, \\mathbb{Q}/\\mathbb{Z})$ (consisting of only continuous homomorphisms) actually <em>does</em> preserve the limit in question. 
So the group of continuous homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is the colimit of the dual groups $(\\mathbb{Z}/n\\mathbb{Z})^D\\cong\\mathbb{Z}/n\\mathbb{Z}$, and this colimit is $\\mathbb{Q}/\\mathbb{Z}$.</p>\n" }, { "answer_id": 2115399, "score": 0, "text": "<p>That's basically all there is to the \"right and easy\" proof for this.</p>\n\n<blockquote>\n <p>$$\\hom\\left(\\Bbb Q/\\Bbb Z,S^1\\right)=\\hom\\left(\\varinjlim_n \\Bbb Z/n, S^1\\right).$$</p>\n</blockquote>\n\n<p>By definition of categorical duals you get this to be</p>\n\n<p>$$\\varprojlim_n\\hom\\left(\\Bbb Z/n, S^1\\right)=\\varprojlim_n\\Bbb Z/n=\\widehat{\\Bbb Z}$$</p>\n\n<p>since finite groups are self-dual. You can bootstrap from this and the fact that the double dual is canonically isomorphic to the original group and you can show the Pontryjagin dual of $\\widehat{\\Bbb Z}$ is $\\Bbb Q/\\Bbb Z$. I always find this particularly pleasing since torsion free dualizes to connected and $\\Bbb Q/\\Bbb Z$ is a completely torsion group just as $\\widehat{\\Bbb Z}$ is totally disconnected.</p>\n\n<p>This is basically what you wrote in fewer words, but it's definitely the \"right\" proof in some sense of the word: it appeals only to canonical categorical facts and is based on simple, straightforward classical facts from the basic theory.</p>\n" }, { "answer_id": 2115515, "score": 3, "text": "<p>It is not true that $\\operatorname{Hom}_\\mathbb{Z} (\\hat{\\mathbb{Z}}, \\mathbb{Q}/\\mathbb{Z}) \\cong \\mathbb{Q}/\\mathbb{Z}$. Indeed, $\\hat{\\mathbb{Z}}$ is an uncountable torsion-free abelian group, so tensoring with $\\mathbb{Q}$ gives an uncountable-dimensional $\\mathbb{Q}$-vector space $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$. 
You can then use this to construct uncountably many different homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ (for instance, pick a basis for $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$ consisting of elements of $\\hat{\\mathbb{Z}}$ and consider the maps to $\\mathbb{Q}$ given by each basis element composed with the map $\\mathbb{Q}\\stackrel{1/2}{\\to}\\mathbb{Q}/\\mathbb{Z}$; for each basis element this gives a homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ sending that basis element to $1/2$ but every other basis element to $0$).</p>\n\n<p>What is true is that the group of <em>continuous</em> homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is isomorphic to $\\mathbb{Q}/\\mathbb{Z}$, where you give $\\hat{\\mathbb{Z}}$ the profinite topology and $\\mathbb{Q}/\\mathbb{Z}$ the quotient topology from $\\mathbb{Q}$ (or the discrete topology, if you prefer; it turns out to make no difference in this case). To prove this, note that since $\\hat{\\mathbb{Z}}$ is compact, the image of any homomorphism $f:\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is compact. But the only compact subgroups of $\\mathbb{Q}/\\mathbb{Z}$ are finite subgroups, so this means the image of $f$ is finite. There is then some $n$ such that $nf(x)=0$ for all $x$, so $f$ factors through the quotient $\\hat{\\mathbb{Z}}\\to\\hat{\\mathbb{Z}}/n\\hat{\\mathbb{Z}}\\cong\\mathbb{Z}/n\\mathbb{Z}$.</p>\n\n<p>Thus every continuous homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ factors through one of the finite quotients $\\mathbb{Z}/n\\mathbb{Z}$ of $\\hat{\\mathbb{Z}}$. This means the functor $\\operatorname{Hom}_c (-, \\mathbb{Q}/\\mathbb{Z})$ (consisting of only continuous homomorphisms) actually <em>does</em> preserve the limit in question. 
So the group of continuous homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is the colimit of the dual groups $(\\mathbb{Z}/n\\mathbb{Z})^D\\cong\\mathbb{Z}/n\\mathbb{Z}$, and this colimit is $\\mathbb{Q}/\\mathbb{Z}$.</p>\n" }, { "answer_id": 2115399, "score": 0, "text": "<p>That's basically all there is to the \"right and easy\" proof for this.</p>\n\n<blockquote>\n <p>$$\\hom\\left(\\Bbb Q/\\Bbb Z,S^1\\right)=\\hom\\left(\\varinjlim_n \\Bbb Z/n, S^1\\right).$$</p>\n</blockquote>\n\n<p>By definition of categorical duals you get this to be</p>\n\n<p>$$\\varprojlim_n\\hom\\left(\\Bbb Z/n, S^1\\right)=\\varprojlim_n\\Bbb Z/n=\\widehat{\\Bbb Z}$$</p>\n\n<p>since finite groups are self-dual. You can bootstrap from this and the fact that the double dual is canonically isomorphic to the original group and you can show the Pontryjagin dual of $\\widehat{\\Bbb Z}$ is $\\Bbb Q/\\Bbb Z$. I always find this particularly pleasing since torsion free dualizes to connected and $\\Bbb Q/\\Bbb Z$ is a completely torsion group just as $\\widehat{\\Bbb Z}$ is totally disconnected.</p>\n\n<p>This is basically what you wrote in fewer words, but it's definitely the \"right\" proof in some sense of the word: it appeals only to canonical categorical facts and is based on simple, straightforward classical facts from the basic theory.</p>\n" }, { "answer_id": 2115515, "score": 3, "text": "<p>It is not true that $\\operatorname{Hom}_\\mathbb{Z} (\\hat{\\mathbb{Z}}, \\mathbb{Q}/\\mathbb{Z}) \\cong \\mathbb{Q}/\\mathbb{Z}$. Indeed, $\\hat{\\mathbb{Z}}$ is an uncountable torsion-free abelian group, so tensoring with $\\mathbb{Q}$ gives an uncountable-dimensional $\\mathbb{Q}$-vector space $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$. 
You can then use this to construct uncountably many different homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ (for instance, pick a basis for $\\hat{\\mathbb{Z}}\\otimes\\mathbb{Q}$ consisting of elements of $\\hat{\\mathbb{Z}}$ and consider the maps to $\\mathbb{Q}$ given by each basis element composed with the map $\\mathbb{Q}\\stackrel{1/2}{\\to}\\mathbb{Q}/\\mathbb{Z}$; for each basis element this gives a homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ sending that basis element to $1/2$ but every other basis element to $0$).</p>\n\n<p>What is true is that the group of <em>continuous</em> homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is isomorphic to $\\mathbb{Q}/\\mathbb{Z}$, where you give $\\hat{\\mathbb{Z}}$ the profinite topology and $\\mathbb{Q}/\\mathbb{Z}$ the quotient topology from $\\mathbb{Q}$ (or the discrete topology, if you prefer; it turns out to make no difference in this case). To prove this, note that since $\\hat{\\mathbb{Z}}$ is compact, the image of any homomorphism $f:\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is compact. But the only compact subgroups of $\\mathbb{Q}/\\mathbb{Z}$ are finite subgroups, so this means the image of $f$ is finite. There is then some $n$ such that $nf(x)=0$ for all $x$, so $f$ factors through the quotient $\\hat{\\mathbb{Z}}\\to\\hat{\\mathbb{Z}}/n\\hat{\\mathbb{Z}}\\cong\\mathbb{Z}/n\\mathbb{Z}$.</p>\n\n<p>Thus every continuous homomorphism $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ factors through one of the finite quotients $\\mathbb{Z}/n\\mathbb{Z}$ of $\\hat{\\mathbb{Z}}$. This means the functor $\\operatorname{Hom}_c (-, \\mathbb{Q}/\\mathbb{Z})$ (consisting of only continuous homomorphisms) actually <em>does</em> preserve the limit in question. 
So the group of continuous homomorphisms $\\hat{\\mathbb{Z}}\\to\\mathbb{Q}/\\mathbb{Z}$ is the colimit of the dual groups $(\\mathbb{Z}/n\\mathbb{Z})^D\\cong\\mathbb{Z}/n\\mathbb{Z}$, and this colimit is $\\mathbb{Q}/\\mathbb{Z}$.</p>\n" } ]
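The key step shared by both answers — a continuous character of $\hat{\mathbb{Z}}$ has finite image and therefore factors through some finite quotient $\mathbb{Z}/n\mathbb{Z}$ — can be made concrete with a small toy model. The sketch below is illustrative only (the residue-dictionary encoding and the helper names are my own, not from the answers): it represents a profinite integer by its residues modulo finitely many moduli and exhibits the character $x \mapsto (x \bmod n)/n$, whose value depends only on the mod-$n$ component.

```python
from fractions import Fraction

def profinite_slice(a, moduli):
    """Finite approximation to a profinite integer: its residues mod each n.

    A genuine element of Z^hat is a compatible system (x_n) with
    x_n = x_m (mod m) whenever m divides n; an ordinary integer a
    gives such a system by reduction.
    """
    return {n: a % n for n in moduli}

def character_through(n):
    """The continuous character Z^hat -> Q/Z that factors through Z/nZ.

    It sends x to (x mod n)/n in Q/Z; the value only depends on the
    residue of x modulo n, which is exactly what "factors through a
    finite quotient" means.
    """
    return lambda x: Fraction(x[n], n) % 1

moduli = [2, 3, 4, 6, 12]
x = profinite_slice(35, moduli)     # the residues of 35 mod 2, 3, 4, 6, 12
chi = character_through(12)         # a character killed by 12
```

Every continuous character arises this way for some $n$, which is why the dual is the directed union $\operatorname{colim}_n \frac{1}{n}\mathbb{Z}/\mathbb{Z} = \mathbb{Q}/\mathbb{Z}$.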
Question: The dual group of the profinite integers Answer 1: <p>It is not true that $\operatorname{Hom}_\mathbb{Z} (\hat{\mathbb{Z}}, \mathbb{Q}/\mathbb{Z}) \cong \mathbb{Q}/\mathbb{Z}$. Indeed, $\hat{\mathbb{Z}}$ is an uncountable torsion-free abelian group, so tensoring with $\mathbb{Q}$ gives an uncountable-dimensional $\mathbb{Q}$-vector space $\hat{\mathbb{Z}}\otimes\mathbb{Q}$. You can then use this to construct uncountably many different homomorphisms $\hat{\mathbb{Z}}\to\mathbb{Q}/\mathbb{Z}$ (for instance, pick a basis for $\hat{\mathbb{Z}}\otimes\mathbb{Q}$ consisting of elements of $\hat{\mathbb{Z}}$ and consider the maps to $\mathbb{Q}$ given by each basis element composed with the map $\mathbb{Q}\stackrel{1/2}{\to}\mathbb{Q}/\mathbb{Z}$; for each basis element this gives a homomorphism $\hat{\mathbb{Z}}\to\mathbb{Q}/\mathbb{Z}$ sending that basis element to $1/2$ but every other basis element to $0$).</p> <p>What is true is that the group of <em>continuous</em> homomorphisms $\hat{\mathbb{Z}}\to\mathbb{Q}/\mathbb{Z}$ is isomorphic to $\mathbb{Q}/\mathbb{Z}$, where you give $\hat{\mathbb{Z}}$ the profinite topology and $\mathbb{Q}/\mathbb{Z}$ the quotient topology from $\mathbb{Q}$ (or the discrete topology, if you prefer; it turns out to make no difference in this case). To prove this, note that since $\hat{\mathbb{Z}}$ is compact, the image of any homomorphism $f:\hat{\mathbb{Z}}\to\mathbb{Q}/\mathbb{Z}$ is compact. But the only compact subgroups of $\mathbb{Q}/\mathbb{Z}$ are finite subgroups, so this means the image of $f$ is finite. There is then some $n$ such that $nf(x)=0$ for all $x$, so $f$ factors through the quotient $\hat{\mathbb{Z}}\to\hat{\mathbb{Z}}/n\hat{\mathbb{Z}}\cong\mathbb{Z}/n\mathbb{Z}$.</p> <p>Thus every continuous homomorphism $\hat{\mathbb{Z}}\to\mathbb{Q}/\mathbb{Z}$ factors through one of the finite quotients $\mathbb{Z}/n\mathbb{Z}$ of $\hat{\mathbb{Z}}$. 
This means the functor $\operatorname{Hom}_c (-, \mathbb{Q}/\mathbb{Z})$ (consisting of only continuous homomorphisms) actually <em>does</em> preserve the limit in question. So the group of continuous homomorphisms $\hat{\mathbb{Z}}\to\mathbb{Q}/\mathbb{Z}$ is the colimit of the dual groups $(\mathbb{Z}/n\mathbb{Z})^D\cong\mathbb{Z}/n\mathbb{Z}$, and this colimit is $\mathbb{Q}/\mathbb{Z}$.</p> Answer 2: <p>That's basically all there is to the "right and easy" proof for this.</p> <blockquote> <p>$$\hom\left(\Bbb Q/\Bbb Z,S^1\right)=\hom\left(\varinjlim_n \Bbb Z/n, S^1\right).$$</p> </blockquote> <p>By definition of categorical duals you get this to be</p> <p>$$\varprojlim_n\hom\left(\Bbb Z/n, S^1\right)=\varprojlim_n\Bbb Z/n=\widehat{\Bbb Z}$$</p> <p>since finite groups are self-dual. You can bootstrap from this and the fact that the double dual is canonically isomorphic to the original group and you can show the Pontryagin dual of $\widehat{\Bbb Z}$ is $\Bbb Q/\Bbb Z$. I always find this particularly pleasing since torsion-free dualizes to connected and $\Bbb Q/\Bbb Z$ is a torsion group just as $\widehat{\Bbb Z}$ is totally disconnected.</p> <p>This is basically what you wrote in fewer words, but it's definitely the "right" proof in some sense of the word: it appeals only to canonical categorical facts and is based on simple, straightforward classical facts from the basic theory.</p>
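The finite self-duality invoked above ($(\mathbb{Z}/n\mathbb{Z})^D \cong \mathbb{Z}/n\mathbb{Z}$) is easy to check by hand: a homomorphism $\mathbb{Z}/n\mathbb{Z} \to \mathbb{Q}/\mathbb{Z}$ is determined by the image of $1$, which must be killed by $n$, i.e. equal to some $k/n$ mod $\mathbb{Z}$. A minimal sketch (function names are mine, purely illustrative):

```python
from fractions import Fraction

def dual_of_cyclic(n):
    """All homomorphisms Z/nZ -> Q/Z, each recorded by the image of 1.

    The image of 1 must satisfy n*f(1) = 0 in Q/Z, so f(1) = k/n mod Z
    for some 0 <= k < n; conversely each such k defines a homomorphism.
    """
    return sorted(Fraction(k, n) for k in range(n))

def add_mod_Z(a, b):
    """Group law on Q/Z: add in Q and reduce to a representative in [0, 1)."""
    return (a + b) % 1

# The dual of Z/6Z has 6 elements and is generated by the character 1/6,
# so it is again cyclic of order 6.
chars = dual_of_cyclic(6)
```

Taking the colimit of these dual groups along the maps dual to the projections $\mathbb{Z}/m\mathbb{Z} \leftarrow \mathbb{Z}/n\mathbb{Z}$ (for $m \mid n$) glues the subgroups $\frac{1}{n}\mathbb{Z}/\mathbb{Z}$ together into $\mathbb{Q}/\mathbb{Z}$.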
So the group of continuous homomorphisms $\hat{\mathbb{Z}}\to\mathbb{Q}/\mathbb{Z}$ is the colimit of the dual groups $(\mathbb{Z}/n\mathbb{Z})^D\cong\mathbb{Z}/n\mathbb{Z}$, and this colimit is $\mathbb{Q}/\mathbb{Z}$.</p> Answer 18: <p>That's basically all there is to the "right and easy" proof for this.</p> <blockquote> <p>$$\hom\left(\Bbb Q/\Bbb Z,S^1\right)=\hom\left(\varinjlim_n \Bbb Z/n, S^1\right).$$</p> </blockquote> <p>By definition of categorical duals you get this to be</p> <p>$$\varprojlim_n\hom\left(\Bbb Z/n, S^1\right)=\varprojlim_n\Bbb Z/n=\widehat{\Bbb Z}$$</p> <p>since finite groups are self-dual. You can bootstrap from this and the fact that the double dual is canonically isomorphic to the original group and you can show the Pontryjagin dual of $\widehat{\Bbb Z}$ is $\Bbb Q/\Bbb Z$. I always find this particularly pleasing since torsion free dualizes to connected and $\Bbb Q/\Bbb Z$ is a completely torsion group just as $\widehat{\Bbb Z}$ is totally disconnected.</p> <p>This is basically what you wrote in fewer words, but it's definitely the "right" proof in some sense of the word: it appeals only to canonical categorical facts and is based on simple, straightforward classical facts from the basic theory.</p> Answer 19: <p>It is not true that $\operatorname{Hom}_\mathbb{Z} (\hat{\mathbb{Z}}, \mathbb{Q}/\mathbb{Z}) \cong \mathbb{Q}/\mathbb{Z}$. Indeed, $\hat{\mathbb{Z}}$ is an uncountable torsion-free abelian group, so tensoring with $\mathbb{Q}$ gives an uncountable-dimensional $\mathbb{Q}$-vector space $\hat{\mathbb{Z}}\otimes\mathbb{Q}$. 
You can then use this to construct uncountably many different homomorphisms $\hat{\mathbb{Z}}\to\mathbb{Q}/\mathbb{Z}$ (for instance, pick a basis for $\hat{\mathbb{Z}}\otimes\mathbb{Q}$ consisting of elements of $\hat{\mathbb{Z}}$ and consider the maps to $\mathbb{Q}$ given by each basis element composed with the map $\mathbb{Q}\stackrel{1/2}{\to}\mathbb{Q}/\mathbb{Z}$; for each basis element this gives a homomorphism $\hat{\mathbb{Z}}\to\mathbb{Q}/\mathbb{Z}$ sending that basis element to $1/2$ but every other basis element to $0$).</p> <p>What is true is that the group of <em>continuous</em> homomorphisms $\hat{\mathbb{Z}}\to\mathbb{Q}/\mathbb{Z}$ is isomorphic to $\mathbb{Q}/\mathbb{Z}$, where you give $\hat{\mathbb{Z}}$ the profinite topology and $\mathbb{Q}/\mathbb{Z}$ the quotient topology from $\mathbb{Q}$ (or the discrete topology, if you prefer; it turns out to make no difference in this case). To prove this, note that since $\hat{\mathbb{Z}}$ is compact, the image of any homomorphism $f:\hat{\mathbb{Z}}\to\mathbb{Q}/\mathbb{Z}$ is compact. But the only compact subgroups of $\mathbb{Q}/\mathbb{Z}$ are finite subgroups, so this means the image of $f$ is finite. There is then some $n$ such that $nf(x)=0$ for all $x$, so $f$ factors through the quotient $\hat{\mathbb{Z}}\to\hat{\mathbb{Z}}/n\hat{\mathbb{Z}}\cong\mathbb{Z}/n\mathbb{Z}$.</p> <p>Thus every continuous homomorphism $\hat{\mathbb{Z}}\to\mathbb{Q}/\mathbb{Z}$ factors through one of the finite quotients $\mathbb{Z}/n\mathbb{Z}$ of $\hat{\mathbb{Z}}$. This means the functor $\operatorname{Hom}_c (-, \mathbb{Q}/\mathbb{Z})$ (consisting of only continuous homomorphisms) actually <em>does</em> preserve the limit in question. 
So the group of continuous homomorphisms $\hat{\mathbb{Z}}\to\mathbb{Q}/\mathbb{Z}$ is the colimit of the dual groups $(\mathbb{Z}/n\mathbb{Z})^D\cong\mathbb{Z}/n\mathbb{Z}$, and this colimit is $\mathbb{Q}/\mathbb{Z}$.</p> Answer 20: <p>That's basically all there is to the "right and easy" proof for this.</p> <blockquote> <p>$$\hom\left(\Bbb Q/\Bbb Z,S^1\right)=\hom\left(\varinjlim_n \Bbb Z/n, S^1\right).$$</p> </blockquote> <p>By definition of categorical duals you get this to be</p> <p>$$\varprojlim_n\hom\left(\Bbb Z/n, S^1\right)=\varprojlim_n\Bbb Z/n=\widehat{\Bbb Z}$$</p> <p>since finite groups are self-dual. You can bootstrap from this and the fact that the double dual is canonically isomorphic to the original group and you can show the Pontryjagin dual of $\widehat{\Bbb Z}$ is $\Bbb Q/\Bbb Z$. I always find this particularly pleasing since torsion free dualizes to connected and $\Bbb Q/\Bbb Z$ is a completely torsion group just as $\widehat{\Bbb Z}$ is totally disconnected.</p> <p>This is basically what you wrote in fewer words, but it's definitely the "right" proof in some sense of the word: it appeals only to canonical categorical facts and is based on simple, straightforward classical facts from the basic theory.</p>
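The finite-level duality the answers above rely on — $\operatorname{Hom}(\mathbb{Z}/n\mathbb{Z}, \mathbb{Q}/\mathbb{Z}) \cong \mathbb{Z}/n\mathbb{Z}$ — can be checked concretely. A homomorphism out of $\mathbb{Z}/n\mathbb{Z}$ is determined by where it sends $1$, and that image must be an element of $\mathbb{Q}/\mathbb{Z}$ killed by $n$, i.e. some $k/n \bmod 1$. The following Python sketch (purely illustrative; the function name is ours, not from the thread) enumerates these images:

```python
from fractions import Fraction

def dual_of_cyclic(n):
    """Hom(Z/nZ, Q/Z): a homomorphism is determined by the image of 1,
    which must be an element x of Q/Z with n*x = 0 in Q/Z,
    i.e. x = k/n mod 1 for some k in {0, ..., n-1}."""
    return [Fraction(k, n) for k in range(n)]

# Each image k/n gives exactly one homomorphism, so |Hom(Z/nZ, Q/Z)| = n,
# matching the self-duality (Z/nZ)^D ≅ Z/nZ used in the colimit argument.
for n in (1, 2, 6, 12):
    homs = dual_of_cyclic(n)
    assert len(homs) == n
    # every image really is killed by n in Q/Z (i.e. n*x is an integer):
    assert all((n * x) % 1 == 0 for x in homs)
```

This is only the finite layer, of course; the thread's point is that taking the colimit over these finite duals recovers $\mathbb{Q}/\mathbb{Z}$ as the continuous dual of $\hat{\mathbb{Z}}$.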
Please critique my proof that $\sqrt{12}$ is irrational
2,115,380
6
https://math.stackexchange.com/questions/2115380/please-critique-my-proof-that-sqrt12-is-irrational
<p>I would like critiques on correctness, conciseness, and clarity. Thanks!</p> <p>Proposition: There is no rational number whose square is 12</p> <p>Proof: Suppose there <em>were</em> such a number, $a = \in \mathbb{Q}$ s.t. $a^2 = 12$.</p> <p>This implies that $\exists$ $m, n \in \mathbb{Z}$ s.t. $\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p> <p>$\Rightarrow m^2 = 12n^2$. </p> <p>This implies that $m^2$ is even, and therefore that $m$ is even; it can thus be written $2k = m$ for some $k \in \mathbb{Z}$.</p> <p>Thus $m^2 = 12n^2 $</p> <p>$\Rightarrow 4k^2 = 12n^2 $</p> <p>$\Rightarrow \frac{k^2}{3} = n^2$</p> <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$ or $\frac{k}{n}$ has a factor (because $\frac{k^2}{n^2}= 3$)</p> <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p> <p>$\Rightarrow 9j^2 = 3n^2 $</p> <p>$\Rightarrow n^2 = 3j^2 $</p> <p>$\Rightarrow n^2 = \frac{k^2}{n^2}j^2$</p> <p>$\Rightarrow k = \frac{n^2}{j}$ but this implies that $j$ divides $n^2$, but $j$ divides $m$, and by initial assumption $n$ and $m$ have no factors in common, so this is a contradiction.</p> <p>Suppose now that $\frac{k}{n}$ is a factor of k. Then $k = \frac{k}{n}j$ for some integer $j$. Then $(\frac{k}{n}j)^2 = 3n^2$ which implies that $3j^2 = 3n^2 \Rightarrow j^2 = n^2 \Rightarrow j = n$. But this means that $n$ divides $m$, which again is a contradiction. Thus any rational representation of the number whose square equals $12$ leads to a contradiction and this number must therefore have no rational representation.</p>
[ { "answer_id": 2115472, "score": 6, "text": "<p><strong>Proof.</strong> &nbsp;\nAssume $\\sqrt{12} \\in \\mathbb{Q}$ is rational, then it can be written as $\\sqrt{12}=\\cfrac{m}{n}$ with $m,n \\in \\mathbb{Z}$ coprime.</p>\n\n<p>Squaring the equality gives $m^2 = 12 n^2 = 3 \\cdot 4 \\cdot n^2\\,$. Therefore $3 \\mid m^2 = m \\cdot m$ and, since $3$ is a prime, it follows by <a href=\"https://en.wikipedia.org/wiki/Euclid&#39;s_lemma\" rel=\"noreferrer\">Euclid's Lemma</a> that $3 \\mid m\\,$.</p>\n\n<p>Then $m = 3k$ for some $k \\in \\mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \\iff 3 k^2 = 4 n^2\\,$. Therefore $3 \\mid 4 n^2$ and, since $3 \\not \\mid 4$ it follows that $3 \\mid n^2$ then, again by Euclid's Lemma, $3 \\mid n\\,$.</p>\n\n<p>But $3 \\mid m$ and $3 \\mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\\sqrt{12} \\in \\mathbb{Q}$ must be false, therefore $\\sqrt{12}$ is irrational.</p>\n\n<p><br>\n<strong>Critique</strong> of the posted proof.</p>\n\n<blockquote>\n <p>Proof: Suppose there <em>were</em> such a number, $a = \\in \\mathbb{Q}$ s.t. $a^2 = 12$.</p>\n \n <p>This implies that $\\exists$ $m, n \\in \\mathbb{Z}$ s.t. $\\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p>\n \n <p>$\\Rightarrow m^2 = 12n^2$. </p>\n</blockquote>\n\n<p>So far so good.</p>\n\n<blockquote>\n <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p>\n</blockquote>\n\n<p>The fact that $2 \\mid m^2 \\implies 2 \\mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p>\n\n<blockquote>\n <p>it can thus be written $2k = m$ for some $k \\in \\mathbb{Z}$.</p>\n \n <p>Thus $m^2 = 12n^2 $</p>\n \n <p>$\\Rightarrow 4k^2 = 12n^2 $</p>\n</blockquote>\n\n<p>Correct. 
As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\\sqrt{3}$ is irrational.</p>\n\n<blockquote>\n <p>$\\Rightarrow \\frac{k^2}{3} = n^2$</p>\n \n <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p>\n</blockquote>\n\n<p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\\,$, which directly implies that $3 \\mid k^2\\,$.</p>\n\n<blockquote>\n <p>or $\\frac{k}{n}$ has a factor (because $\\frac{k^2}{n^2}= 3$)</p>\n</blockquote>\n\n<p>This makes no sense, and it is in fact not needed to complete the proof.</p>\n\n<blockquote>\n <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p>\n \n <p>$\\Rightarrow 9j^2 = 3n^2 $</p>\n \n <p>$\\Rightarrow n^2 = 3j^2 $</p>\n</blockquote>\n\n<p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \\mid n^2\\,$, and therefore $3 \\mid n$ which contradicts the assumption that $m,n$ are coprime.</p>\n\n<blockquote>\n <p>[ <em>rest of post snipped</em> ]</p>\n</blockquote>\n" }, { "answer_id": 2115610, "score": 2, "text": "<p>\"This implies that ∃ m,n∈Z s.t. m2n2=12. Assume without loss of generality that m, n have no factors in common.\"</p>\n\n<p>I, personally, would not argue \"without loss of generality\" . $q \\in \\mathbb Q$ is defined as $q = \\frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p>\n\n<p>\"This implies that $m^2$ is even, and therefore that m is even\"</p>\n\n<p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: \"Therefore $2|m^2$ and, as $2$ is prime, $2|m$\". 
This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p>\n\n<p>\"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\\frac kn$ has a factor \".</p>\n\n<p>As $\\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p>\n\n<p>Also, if $\\frac kn$ is an integer at all, then it is trivial that $\\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. And if $\\frac kn$ is not an integer then the statement $\\frac kn$ is a factor of $k$ is meaningless.</p>\n\n<p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\\sqrt{12} = 2\\sqrt{3}$ is an integer which is easy to verify is not the case).</p>\n\n<p>\"if the former ($3|k$) then .... \"</p>\n\n<p>All that is just fine and the rest is unneeded.</p>\n\n<p>But the rest is a bit of a mess.</p>\n\n<p>\"Suppose now that $k/n$ is a factor of $k$\" Again, this is trivial if $n|k$ and is meaningless if $n \\not \\mid k$.</p>\n\n<p>And we can rule out $n|k$ as that would imply $\\sqrt{12} = m/n = 2k/n$ is an integer. Which</p>\n\n<p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p>\n\n<p>\" Then $(\\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$\"</p>\n\n<p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p>\n\n<p>\"But this means that n divides m, which again is a contradiction.\"</p>\n\n<p>Actually it's not a contradiction if $n = 1$. </p>\n\n<p>But this isn't a contradiction that needed to be reached. 
$k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p>\n" }, { "answer_id": 2115546, "score": 0, "text": "<p>See this proof that \nif $n$ is not a perfect square\nthen $\\sqrt{n}$\nis irrational:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466\">Follow-up Question: Proof of Irrationality of $\\sqrt{3}$</a></p>\n\n<p>The proof starts by saying that\nif $n$ is not a perfect square\nthen there is a $k$ such that\n$k^2 &lt; n &lt; (k+1)^2$.\nThe proof breaks down if\n$k^2 = n$.</p>\n\n<p>Note that\nthis proof does <em>not</em>\nuse divisibility.</p>\n" }, { "answer_id": 2115472, "score": 6, "text": "<p><strong>Proof.</strong> &nbsp;\nAssume $\\sqrt{12} \\in \\mathbb{Q}$ is rational, then it can be written as $\\sqrt{12}=\\cfrac{m}{n}$ with $m,n \\in \\mathbb{Z}$ coprime.</p>\n\n<p>Squaring the equality gives $m^2 = 12 n^2 = 3 \\cdot 4 \\cdot n^2\\,$. Therefore $3 \\mid m^2 = m \\cdot m$ and, since $3$ is a prime, it follows by <a href=\"https://en.wikipedia.org/wiki/Euclid&#39;s_lemma\" rel=\"noreferrer\">Euclid's Lemma</a> that $3 \\mid m\\,$.</p>\n\n<p>Then $m = 3k$ for some $k \\in \\mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \\iff 3 k^2 = 4 n^2\\,$. Therefore $3 \\mid 4 n^2$ and, since $3 \\not \\mid 4$ it follows that $3 \\mid n^2$ then, again by Euclid's Lemma, $3 \\mid n\\,$.</p>\n\n<p>But $3 \\mid m$ and $3 \\mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\\sqrt{12} \\in \\mathbb{Q}$ must be false, therefore $\\sqrt{12}$ is irrational.</p>\n\n<p><br>\n<strong>Critique</strong> of the posted proof.</p>\n\n<blockquote>\n <p>Proof: Suppose there <em>were</em> such a number, $a = \\in \\mathbb{Q}$ s.t. $a^2 = 12$.</p>\n \n <p>This implies that $\\exists$ $m, n \\in \\mathbb{Z}$ s.t. 
$\\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p>\n \n <p>$\\Rightarrow m^2 = 12n^2$. </p>\n</blockquote>\n\n<p>So far so good.</p>\n\n<blockquote>\n <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p>\n</blockquote>\n\n<p>The fact that $2 \\mid m^2 \\implies 2 \\mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p>\n\n<blockquote>\n <p>it can thus be written $2k = m$ for some $k \\in \\mathbb{Z}$.</p>\n \n <p>Thus $m^2 = 12n^2 $</p>\n \n <p>$\\Rightarrow 4k^2 = 12n^2 $</p>\n</blockquote>\n\n<p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\\sqrt{3}$ is irrational.</p>\n\n<blockquote>\n <p>$\\Rightarrow \\frac{k^2}{3} = n^2$</p>\n \n <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p>\n</blockquote>\n\n<p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\\,$, which directly implies that $3 \\mid k^2\\,$.</p>\n\n<blockquote>\n <p>or $\\frac{k}{n}$ has a factor (because $\\frac{k^2}{n^2}= 3$)</p>\n</blockquote>\n\n<p>This makes no sense, and it is in fact not needed to complete the proof.</p>\n\n<blockquote>\n <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p>\n \n <p>$\\Rightarrow 9j^2 = 3n^2 $</p>\n \n <p>$\\Rightarrow n^2 = 3j^2 $</p>\n</blockquote>\n\n<p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \\mid n^2\\,$, and therefore $3 \\mid n$ which contradicts the assumption that $m,n$ are coprime.</p>\n\n<blockquote>\n <p>[ <em>rest of post snipped</em> ]</p>\n</blockquote>\n" }, { "answer_id": 2115610, "score": 2, "text": "<p>\"This implies that ∃ m,n∈Z s.t. m2n2=12. 
Assume without loss of generality that m, n have no factors in common.\"</p>\n\n<p>I, personally, would not argue \"without loss of generality\" . $q \\in \\mathbb Q$ is defined as $q = \\frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p>\n\n<p>\"This implies that $m^2$ is even, and therefore that m is even\"</p>\n\n<p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: \"Therefore $2|m^2$ and, as $2$ is prime, $2|m$\". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p>\n\n<p>\"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\\frac kn$ has a factor \".</p>\n\n<p>As $\\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p>\n\n<p>Also, if $\\frac kn$ is an integer at all, then it is trivial that $\\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. And if $\\frac kn$ is not an integer then the statement $\\frac kn$ is a factor of $k$ is meaningless.</p>\n\n<p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\\sqrt{12} = 2\\sqrt{3}$ is an integer which is easy to verify is not the case).</p>\n\n<p>\"if the former ($3|k$) then .... 
\"</p>\n\n<p>All that is just fine and the rest is unneeded.</p>\n\n<p>But the rest is a bit of a mess.</p>\n\n<p>\"Suppose now that $k/n$ is a factor of $k$\" Again, this is trivial if $n|k$ and is meaningless if $n \\not \\mid k$.</p>\n\n<p>And we can rule out $n|k$ as that would imply $\\sqrt{12} = m/n = 2k/n$ is an integer. Which</p>\n\n<p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p>\n\n<p>\" Then $(\\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$\"</p>\n\n<p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p>\n\n<p>\"But this means that n divides m, which again is a contradiction.\"</p>\n\n<p>Actually it's not a contradiction if $n = 1$. </p>\n\n<p>But this isn't a contradiction that needed to be reached. $k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p>\n" }, { "answer_id": 2115546, "score": 0, "text": "<p>See this proof that \nif $n$ is not a perfect square\nthen $\\sqrt{n}$\nis irrational:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466\">Follow-up Question: Proof of Irrationality of $\\sqrt{3}$</a></p>\n\n<p>The proof starts by saying that\nif $n$ is not a perfect square\nthen there is a $k$ such that\n$k^2 &lt; n &lt; (k+1)^2$.\nThe proof breaks down if\n$k^2 = n$.</p>\n\n<p>Note that\nthis proof does <em>not</em>\nuse divisibility.</p>\n" }, { "answer_id": 2115472, "score": 6, "text": "<p><strong>Proof.</strong> &nbsp;\nAssume $\\sqrt{12} \\in \\mathbb{Q}$ is rational, then it can be written as $\\sqrt{12}=\\cfrac{m}{n}$ with $m,n \\in \\mathbb{Z}$ coprime.</p>\n\n<p>Squaring the equality gives $m^2 = 12 n^2 = 3 \\cdot 4 \\cdot n^2\\,$. 
Therefore $3 \\mid m^2 = m \\cdot m$ and, since $3$ is a prime, it follows by <a href=\"https://en.wikipedia.org/wiki/Euclid&#39;s_lemma\" rel=\"noreferrer\">Euclid's Lemma</a> that $3 \\mid m\\,$.</p>\n\n<p>Then $m = 3k$ for some $k \\in \\mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \\iff 3 k^2 = 4 n^2\\,$. Therefore $3 \\mid 4 n^2$ and, since $3 \\not \\mid 4$ it follows that $3 \\mid n^2$ then, again by Euclid's Lemma, $3 \\mid n\\,$.</p>\n\n<p>But $3 \\mid m$ and $3 \\mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\\sqrt{12} \\in \\mathbb{Q}$ must be false, therefore $\\sqrt{12}$ is irrational.</p>\n\n<p><br>\n<strong>Critique</strong> of the posted proof.</p>\n\n<blockquote>\n <p>Proof: Suppose there <em>were</em> such a number, $a = \\in \\mathbb{Q}$ s.t. $a^2 = 12$.</p>\n \n <p>This implies that $\\exists$ $m, n \\in \\mathbb{Z}$ s.t. $\\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p>\n \n <p>$\\Rightarrow m^2 = 12n^2$. </p>\n</blockquote>\n\n<p>So far so good.</p>\n\n<blockquote>\n <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p>\n</blockquote>\n\n<p>The fact that $2 \\mid m^2 \\implies 2 \\mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p>\n\n<blockquote>\n <p>it can thus be written $2k = m$ for some $k \\in \\mathbb{Z}$.</p>\n \n <p>Thus $m^2 = 12n^2 $</p>\n \n <p>$\\Rightarrow 4k^2 = 12n^2 $</p>\n</blockquote>\n\n<p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\\sqrt{3}$ is irrational.</p>\n\n<blockquote>\n <p>$\\Rightarrow \\frac{k^2}{3} = n^2$</p>\n \n <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p>\n</blockquote>\n\n<p>You should generally avoid fractions where they are not necessary. 
The previous line gave $k^2 = 3 n^2\\,$, which directly implies that $3 \\mid k^2\\,$.</p>\n\n<blockquote>\n <p>or $\\frac{k}{n}$ has a factor (because $\\frac{k^2}{n^2}= 3$)</p>\n</blockquote>\n\n<p>This makes no sense, and it is in fact not needed to complete the proof.</p>\n\n<blockquote>\n <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p>\n \n <p>$\\Rightarrow 9j^2 = 3n^2 $</p>\n \n <p>$\\Rightarrow n^2 = 3j^2 $</p>\n</blockquote>\n\n<p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \\mid n^2\\,$, and therefore $3 \\mid n$ which contradicts the assumption that $m,n$ are coprime.</p>\n\n<blockquote>\n <p>[ <em>rest of post snipped</em> ]</p>\n</blockquote>\n" }, { "answer_id": 2115610, "score": 2, "text": "<p>\"This implies that ∃ m,n∈Z s.t. m2n2=12. Assume without loss of generality that m, n have no factors in common.\"</p>\n\n<p>I, personally, would not argue \"without loss of generality\" . $q \\in \\mathbb Q$ is defined as $q = \\frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p>\n\n<p>\"This implies that $m^2$ is even, and therefore that m is even\"</p>\n\n<p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: \"Therefore $2|m^2$ and, as $2$ is prime, $2|m$\". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p>\n\n<p>\"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\\frac kn$ has a factor \".</p>\n\n<p>As $\\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. 
That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p>\n\n<p>Also, if $\\frac kn$ is an integer at all, then it is trivial that $\\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. And if $\\frac kn$ is not an integer then the statement $\\frac kn$ is a factor of $k$ is meaningless.</p>\n\n<p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\\sqrt{12} = 2\\sqrt{3}$ is an integer which is easy to verify is not the case).</p>\n\n<p>\"if the former ($3|k$) then .... \"</p>\n\n<p>All that is just fine and the rest is unneeded.</p>\n\n<p>But the rest is a bit of a mess.</p>\n\n<p>\"Suppose now that $k/n$ is a factor of $k$\" Again, this is trivial if $n|k$ and is meaningless if $n \\not \\mid k$.</p>\n\n<p>And we can rule out $n|k$ as that would imply $\\sqrt{12} = m/n = 2k/n$ is an integer. Which</p>\n\n<p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p>\n\n<p>\" Then $(\\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$\"</p>\n\n<p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p>\n\n<p>\"But this means that n divides m, which again is a contradiction.\"</p>\n\n<p>Actually it's not a contradiction if $n = 1$. </p>\n\n<p>But this isn't a contradiction that needed to be reached. 
$k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p>\n" }, { "answer_id": 2115546, "score": 0, "text": "<p>See this proof that \nif $n$ is not a perfect square\nthen $\\sqrt{n}$\nis irrational:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466\">Follow-up Question: Proof of Irrationality of $\\sqrt{3}$</a></p>\n\n<p>The proof starts by saying that\nif $n$ is not a perfect square\nthen there is a $k$ such that\n$k^2 &lt; n &lt; (k+1)^2$.\nThe proof breaks down if\n$k^2 = n$.</p>\n\n<p>Note that\nthis proof does <em>not</em>\nuse divisibility.</p>\n" }, { "answer_id": 2115472, "score": 6, "text": "<p><strong>Proof.</strong> &nbsp;\nAssume $\\sqrt{12} \\in \\mathbb{Q}$ is rational, then it can be written as $\\sqrt{12}=\\cfrac{m}{n}$ with $m,n \\in \\mathbb{Z}$ coprime.</p>\n\n<p>Squaring the equality gives $m^2 = 12 n^2 = 3 \\cdot 4 \\cdot n^2\\,$. Therefore $3 \\mid m^2 = m \\cdot m$ and, since $3$ is a prime, it follows by <a href=\"https://en.wikipedia.org/wiki/Euclid&#39;s_lemma\" rel=\"noreferrer\">Euclid's Lemma</a> that $3 \\mid m\\,$.</p>\n\n<p>Then $m = 3k$ for some $k \\in \\mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \\iff 3 k^2 = 4 n^2\\,$. Therefore $3 \\mid 4 n^2$ and, since $3 \\not \\mid 4$ it follows that $3 \\mid n^2$ then, again by Euclid's Lemma, $3 \\mid n\\,$.</p>\n\n<p>But $3 \\mid m$ and $3 \\mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\\sqrt{12} \\in \\mathbb{Q}$ must be false, therefore $\\sqrt{12}$ is irrational.</p>\n\n<p><br>\n<strong>Critique</strong> of the posted proof.</p>\n\n<blockquote>\n <p>Proof: Suppose there <em>were</em> such a number, $a = \\in \\mathbb{Q}$ s.t. $a^2 = 12$.</p>\n \n <p>This implies that $\\exists$ $m, n \\in \\mathbb{Z}$ s.t. 
$\\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p>\n \n <p>$\\Rightarrow m^2 = 12n^2$. </p>\n</blockquote>\n\n<p>So far so good.</p>\n\n<blockquote>\n <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p>\n</blockquote>\n\n<p>The fact that $2 \\mid m^2 \\implies 2 \\mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p>\n\n<blockquote>\n <p>it can thus be written $2k = m$ for some $k \\in \\mathbb{Z}$.</p>\n \n <p>Thus $m^2 = 12n^2 $</p>\n \n <p>$\\Rightarrow 4k^2 = 12n^2 $</p>\n</blockquote>\n\n<p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\\sqrt{3}$ is irrational.</p>\n\n<blockquote>\n <p>$\\Rightarrow \\frac{k^2}{3} = n^2$</p>\n \n <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p>\n</blockquote>\n\n<p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\\,$, which directly implies that $3 \\mid k^2\\,$.</p>\n\n<blockquote>\n <p>or $\\frac{k}{n}$ has a factor (because $\\frac{k^2}{n^2}= 3$)</p>\n</blockquote>\n\n<p>This makes no sense, and it is in fact not needed to complete the proof.</p>\n\n<blockquote>\n <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p>\n \n <p>$\\Rightarrow 9j^2 = 3n^2 $</p>\n \n <p>$\\Rightarrow n^2 = 3j^2 $</p>\n</blockquote>\n\n<p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \\mid n^2\\,$, and therefore $3 \\mid n$ which contradicts the assumption that $m,n$ are coprime.</p>\n\n<blockquote>\n <p>[ <em>rest of post snipped</em> ]</p>\n</blockquote>\n" }, { "answer_id": 2115610, "score": 2, "text": "<p>\"This implies that ∃ m,n∈Z s.t. m2n2=12. 
Assume without loss of generality that m, n have no factors in common.\"</p>\n\n<p>I, personally, would not argue \"without loss of generality\". $q \\in \\mathbb Q$ is defined as $q = \\frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p>\n\n<p>\"This implies that $m^2$ is even, and therefore that m is even\"</p>\n\n<p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: \"Therefore $2|m^2$ and, as $2$ is prime, $2|m$\". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p>\n\n<p>\"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\\frac kn$ has a factor \".</p>\n\n<p>As $\\frac {k^2}3$ is an integer, it implies $3|k^2$. Period. That always happens. That anything else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p>\n\n<p>Also, if $\\frac kn$ is an integer at all, then it is trivial that $\\frac kn$ is a factor of $k$ whether or not $k^2/3$ is an integer or not. And if $\\frac kn$ is not an integer then the statement $\\frac kn$ is a factor of $k$ is meaningless.</p>\n\n<p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\\sqrt{12} = 2\\sqrt{3}$ is an integer which is easy to verify is not the case).</p>\n\n<p>\"if the former ($3|k$) then .... 
\"</p>\n\n<p>All that is just fine and the rest is unneeded.</p>\n\n<p>But the rest is a bit of a mess.</p>\n\n<p>\"Suppose now that $k/n$ is a factor of $k$\" Again, this is trivial if $n|k$ and is meaningless if $n \\not \\mid k$.</p>\n\n<p>And we can rule out $n|k$ as that would imply $\\sqrt{12} = m/n = 2k/n$ is an integer. Which</p>\n\n<p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p>\n\n<p>\" Then $(\\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$\"</p>\n\n<p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p>\n\n<p>\"But this means that n divides m, which again is a contradiction.\"</p>\n\n<p>Actually it's not a contradiction if $n = 1$. </p>\n\n<p>But this isn't a contradiction that needed to be reached. $k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p>\n" }, { "answer_id": 2115546, "score": 0, "text": "<p>See this proof that \nif $n$ is not a perfect square\nthen $\\sqrt{n}$\nis irrational:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466\">Follow-up Question: Proof of Irrationality of $\\sqrt{3}$</a></p>\n\n<p>The proof starts by saying that\nif $n$ is not a perfect square\nthen there is a $k$ such that\n$k^2 &lt; n &lt; (k+1)^2$.\nThe proof breaks down if\n$k^2 = n$.</p>\n\n<p>Note that\nthis proof does <em>not</em>\nuse divisibility.</p>\n" }, { "answer_id": 2115472, "score": 6, "text": "<p><strong>Proof.</strong> &nbsp;\nAssume $\\sqrt{12} \\in \\mathbb{Q}$ is rational, then it can be written as $\\sqrt{12}=\\cfrac{m}{n}$ with $m,n \\in \\mathbb{Z}$ coprime.</p>\n\n<p>Squaring the equality gives $m^2 = 12 n^2 = 3 \\cdot 4 \\cdot n^2\\,$. 
Therefore $3 \\mid m^2 = m \\cdot m$ and, since $3$ is a prime, it follows by <a href=\"https://en.wikipedia.org/wiki/Euclid&#39;s_lemma\" rel=\"noreferrer\">Euclid's Lemma</a> that $3 \\mid m\\,$.</p>\n\n<p>Then $m = 3k$ for some $k \\in \\mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \\iff 3 k^2 = 4 n^2\\,$. Therefore $3 \\mid 4 n^2$ and, since $3 \\not \\mid 4$ it follows that $3 \\mid n^2$ then, again by Euclid's Lemma, $3 \\mid n\\,$.</p>\n\n<p>But $3 \\mid m$ and $3 \\mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\\sqrt{12} \\in \\mathbb{Q}$ must be false, therefore $\\sqrt{12}$ is irrational.</p>\n\n<p><br>\n<strong>Critique</strong> of the posted proof.</p>\n\n<blockquote>\n <p>Proof: Suppose there <em>were</em> such a number, $a = \\in \\mathbb{Q}$ s.t. $a^2 = 12$.</p>\n \n <p>This implies that $\\exists$ $m, n \\in \\mathbb{Z}$ s.t. $\\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p>\n \n <p>$\\Rightarrow m^2 = 12n^2$. </p>\n</blockquote>\n\n<p>So far so good.</p>\n\n<blockquote>\n <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p>\n</blockquote>\n\n<p>The fact that $2 \\mid m^2 \\implies 2 \\mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p>\n\n<blockquote>\n <p>it can thus be written $2k = m$ for some $k \\in \\mathbb{Z}$.</p>\n \n <p>Thus $m^2 = 12n^2 $</p>\n \n <p>$\\Rightarrow 4k^2 = 12n^2 $</p>\n</blockquote>\n\n<p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\\sqrt{3}$ is irrational.</p>\n\n<blockquote>\n <p>$\\Rightarrow \\frac{k^2}{3} = n^2$</p>\n \n <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p>\n</blockquote>\n\n<p>You should generally avoid fractions where they are not necessary. 
The previous line gave $k^2 = 3 n^2\\,$, which directly implies that $3 \\mid k^2\\,$.</p>\n\n<blockquote>\n <p>or $\\frac{k}{n}$ has a factor (because $\\frac{k^2}{n^2}= 3$)</p>\n</blockquote>\n\n<p>This makes no sense, and it is in fact not needed to complete the proof.</p>\n\n<blockquote>\n <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p>\n \n <p>$\\Rightarrow 9j^2 = 3n^2 $</p>\n \n <p>$\\Rightarrow n^2 = 3j^2 $</p>\n</blockquote>\n\n<p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \\mid n^2\\,$, and therefore $3 \\mid n$ which contradicts the assumption that $m,n$ are coprime.</p>\n\n<blockquote>\n <p>[ <em>rest of post snipped</em> ]</p>\n</blockquote>\n" }, { "answer_id": 2115610, "score": 2, "text": "<p>\"This implies that ∃ m,n∈Z s.t. m2n2=12. Assume without loss of generality that m, n have no factors in common.\"</p>\n\n<p>I, personally, would not argue \"without loss of generality\" . $q \\in \\mathbb Q$ is defined as $q = \\frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p>\n\n<p>\"This implies that $m^2$ is even, and therefore that m is even\"</p>\n\n<p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: \"Therefore $2|m^2$ and, as $2$ is prime, $2|m$\". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p>\n\n<p>\"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\\frac kn$ has a factor \".</p>\n\n<p>As $\\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. 
That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p>\n\n<p>Also, if $\\frac kn$ is an integer at all, then it is trivial that $\\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. And if $\\frac kn$ is not an integer then the statement $\\frac kn$ is a factor of $k$ is meaningless.</p>\n\n<p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\\sqrt{12} = 2\\sqrt{3}$ is an integer which is easy to verify is not the case).</p>\n\n<p>\"if the former ($3|k$) then .... \"</p>\n\n<p>All that is just fine and the rest is unneeded.</p>\n\n<p>But the rest is a bit of a mess.</p>\n\n<p>\"Suppose now that $k/n$ is a factor of $k$\" Again, this is trivial if $n|k$ and is meaningless if $n \\not \\mid k$.</p>\n\n<p>And we can rule out $n|k$ as that would imply $\\sqrt{12} = m/n = 2k/n$ is an integer. Which</p>\n\n<p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p>\n\n<p>\" Then $(\\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$\"</p>\n\n<p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p>\n\n<p>\"But this means that n divides m, which again is a contradiction.\"</p>\n\n<p>Actually it's not a contradiction if $n = 1$. </p>\n\n<p>But this isn't a contradiction that needed to be reached. 
$k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p>\n" }, { "answer_id": 2115546, "score": 0, "text": "<p>See this proof that \nif $n$ is not a perfect square\nthen $\\sqrt{n}$\nis irrational:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466\">Follow-up Question: Proof of Irrationality of $\\sqrt{3}$</a></p>\n\n<p>The proof starts by saying that\nif $n$ is not a perfect square\nthen there is a $k$ such that\n$k^2 &lt; n &lt; (k+1)^2$.\nThe proof breaks down if\n$k^2 = n$.</p>\n\n<p>Note that\nthis proof does <em>not</em>\nuse divisibility.</p>\n" }, { "answer_id": 2115472, "score": 6, "text": "<p><strong>Proof.</strong> &nbsp;\nAssume $\\sqrt{12} \\in \\mathbb{Q}$ is rational, then it can be written as $\\sqrt{12}=\\cfrac{m}{n}$ with $m,n \\in \\mathbb{Z}$ coprime.</p>\n\n<p>Squaring the equality gives $m^2 = 12 n^2 = 3 \\cdot 4 \\cdot n^2\\,$. Therefore $3 \\mid m^2 = m \\cdot m$ and, since $3$ is a prime, it follows by <a href=\"https://en.wikipedia.org/wiki/Euclid&#39;s_lemma\" rel=\"noreferrer\">Euclid's Lemma</a> that $3 \\mid m\\,$.</p>\n\n<p>Then $m = 3k$ for some $k \\in \\mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \\iff 3 k^2 = 4 n^2\\,$. Therefore $3 \\mid 4 n^2$ and, since $3 \\not \\mid 4$ it follows that $3 \\mid n^2$ then, again by Euclid's Lemma, $3 \\mid n\\,$.</p>\n\n<p>But $3 \\mid m$ and $3 \\mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\\sqrt{12} \\in \\mathbb{Q}$ must be false, therefore $\\sqrt{12}$ is irrational.</p>\n\n<p><br>\n<strong>Critique</strong> of the posted proof.</p>\n\n<blockquote>\n <p>Proof: Suppose there <em>were</em> such a number, $a = \\in \\mathbb{Q}$ s.t. $a^2 = 12$.</p>\n \n <p>This implies that $\\exists$ $m, n \\in \\mathbb{Z}$ s.t. 
$\\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p>\n \n <p>$\\Rightarrow m^2 = 12n^2$. </p>\n</blockquote>\n\n<p>So far so good.</p>\n\n<blockquote>\n <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p>\n</blockquote>\n\n<p>The fact that $2 \\mid m^2 \\implies 2 \\mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p>\n\n<blockquote>\n <p>it can thus be written $2k = m$ for some $k \\in \\mathbb{Z}$.</p>\n \n <p>Thus $m^2 = 12n^2 $</p>\n \n <p>$\\Rightarrow 4k^2 = 12n^2 $</p>\n</blockquote>\n\n<p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\\sqrt{3}$ is irrational.</p>\n\n<blockquote>\n <p>$\\Rightarrow \\frac{k^2}{3} = n^2$</p>\n \n <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p>\n</blockquote>\n\n<p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\\,$, which directly implies that $3 \\mid k^2\\,$.</p>\n\n<blockquote>\n <p>or $\\frac{k}{n}$ has a factor (because $\\frac{k^2}{n^2}= 3$)</p>\n</blockquote>\n\n<p>This makes no sense, and it is in fact not needed to complete the proof.</p>\n\n<blockquote>\n <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p>\n \n <p>$\\Rightarrow 9j^2 = 3n^2 $</p>\n \n <p>$\\Rightarrow n^2 = 3j^2 $</p>\n</blockquote>\n\n<p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \\mid n^2\\,$, and therefore $3 \\mid n$ which contradicts the assumption that $m,n$ are coprime.</p>\n\n<blockquote>\n <p>[ <em>rest of post snipped</em> ]</p>\n</blockquote>\n" }, { "answer_id": 2115610, "score": 2, "text": "<p>\"This implies that ∃ m,n∈Z s.t. m2n2=12. 
Assume without loss of generality that m, n have no factors in common.\"</p>\n\n<p>I, personally, would not argue \"without loss of generality\" . $q \\in \\mathbb Q$ is defined as $q = \\frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p>\n\n<p>\"This implies that $m^2$ is even, and therefore that m is even\"</p>\n\n<p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: \"Therefore $2|m^2$ and, as $2$ is prime, $2|m$\". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p>\n\n<p>\"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\\frac kn$ has a factor \".</p>\n\n<p>As $\\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p>\n\n<p>Also, if $\\frac kn$ is an integer at all, then it is trivial that $\\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. And if $\\frac kn$ is not an integer then the statement $\\frac kn$ is a factor of $k$ is meaningless.</p>\n\n<p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\\sqrt{12} = 2\\sqrt{3}$ is an integer which is easy to verify is not the case).</p>\n\n<p>\"if the former ($3|k$) then .... 
\"</p>\n\n<p>All that is just fine and the rest is unneeded.</p>\n\n<p>But the rest is a bit of a mess.</p>\n\n<p>\"Suppose now that $k/n$ is a factor of $k$\" Again, this is trivial if $n|k$ and is meaningless if $n \\not \\mid k$.</p>\n\n<p>And we can rule out $n|k$ as that would imply $\\sqrt{12} = m/n = 2k/n$ is an integer. Which</p>\n\n<p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p>\n\n<p>\" Then $(\\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$\"</p>\n\n<p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p>\n\n<p>\"But this means that n divides m, which again is a contradiction.\"</p>\n\n<p>Actually it's not a contradiction if $n = 1$. </p>\n\n<p>But this isn't a contradiction that needed to be reached. $k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p>\n" }, { "answer_id": 2115546, "score": 0, "text": "<p>See this proof that \nif $n$ is not a perfect square\nthen $\\sqrt{n}$\nis irrational:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466\">Follow-up Question: Proof of Irrationality of $\\sqrt{3}$</a></p>\n\n<p>The proof starts by saying that\nif $n$ is not a perfect square\nthen there is a $k$ such that\n$k^2 &lt; n &lt; (k+1)^2$.\nThe proof breaks down if\n$k^2 = n$.</p>\n\n<p>Note that\nthis proof does <em>not</em>\nuse divisibility.</p>\n" }, { "answer_id": 2115472, "score": 6, "text": "<p><strong>Proof.</strong> &nbsp;\nAssume $\\sqrt{12} \\in \\mathbb{Q}$ is rational, then it can be written as $\\sqrt{12}=\\cfrac{m}{n}$ with $m,n \\in \\mathbb{Z}$ coprime.</p>\n\n<p>Squaring the equality gives $m^2 = 12 n^2 = 3 \\cdot 4 \\cdot n^2\\,$. 
Therefore $3 \\mid m^2 = m \\cdot m$ and, since $3$ is a prime, it follows by <a href=\"https://en.wikipedia.org/wiki/Euclid&#39;s_lemma\" rel=\"noreferrer\">Euclid's Lemma</a> that $3 \\mid m\\,$.</p>\n\n<p>Then $m = 3k$ for some $k \\in \\mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \\iff 3 k^2 = 4 n^2\\,$. Therefore $3 \\mid 4 n^2$ and, since $3 \\not \\mid 4$ it follows that $3 \\mid n^2$ then, again by Euclid's Lemma, $3 \\mid n\\,$.</p>\n\n<p>But $3 \\mid m$ and $3 \\mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\\sqrt{12} \\in \\mathbb{Q}$ must be false, therefore $\\sqrt{12}$ is irrational.</p>\n\n<p><br>\n<strong>Critique</strong> of the posted proof.</p>\n\n<blockquote>\n <p>Proof: Suppose there <em>were</em> such a number, $a = \\in \\mathbb{Q}$ s.t. $a^2 = 12$.</p>\n \n <p>This implies that $\\exists$ $m, n \\in \\mathbb{Z}$ s.t. $\\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p>\n \n <p>$\\Rightarrow m^2 = 12n^2$. </p>\n</blockquote>\n\n<p>So far so good.</p>\n\n<blockquote>\n <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p>\n</blockquote>\n\n<p>The fact that $2 \\mid m^2 \\implies 2 \\mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p>\n\n<blockquote>\n <p>it can thus be written $2k = m$ for some $k \\in \\mathbb{Z}$.</p>\n \n <p>Thus $m^2 = 12n^2 $</p>\n \n <p>$\\Rightarrow 4k^2 = 12n^2 $</p>\n</blockquote>\n\n<p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\\sqrt{3}$ is irrational.</p>\n\n<blockquote>\n <p>$\\Rightarrow \\frac{k^2}{3} = n^2$</p>\n \n <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p>\n</blockquote>\n\n<p>You should generally avoid fractions where they are not necessary. 
The previous line gave $k^2 = 3 n^2\\,$, which directly implies that $3 \\mid k^2\\,$.</p>\n\n<blockquote>\n <p>or $\\frac{k}{n}$ has a factor (because $\\frac{k^2}{n^2}= 3$)</p>\n</blockquote>\n\n<p>This makes no sense, and it is in fact not needed to complete the proof.</p>\n\n<blockquote>\n <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p>\n \n <p>$\\Rightarrow 9j^2 = 3n^2 $</p>\n \n <p>$\\Rightarrow n^2 = 3j^2 $</p>\n</blockquote>\n\n<p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \\mid n^2\\,$, and therefore $3 \\mid n$ which contradicts the assumption that $m,n$ are coprime.</p>\n\n<blockquote>\n <p>[ <em>rest of post snipped</em> ]</p>\n</blockquote>\n" }, { "answer_id": 2115610, "score": 2, "text": "<p>\"This implies that ∃ m,n∈Z s.t. m2n2=12. Assume without loss of generality that m, n have no factors in common.\"</p>\n\n<p>I, personally, would not argue \"without loss of generality\" . $q \\in \\mathbb Q$ is defined as $q = \\frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p>\n\n<p>\"This implies that $m^2$ is even, and therefore that m is even\"</p>\n\n<p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: \"Therefore $2|m^2$ and, as $2$ is prime, $2|m$\". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p>\n\n<p>\"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\\frac kn$ has a factor \".</p>\n\n<p>As $\\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. 
That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p>\n\n<p>Also, if $\\frac kn$ is an integer at all, then it is trivial that $\\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. And if $\\frac kn$ is not an integer then the statement $\\frac kn$ is a factor of $k$ is meaningless.</p>\n\n<p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\\sqrt{12} = 2\\sqrt{3}$ is an integer which is easy to verify is not the case).</p>\n\n<p>\"if the former ($3|k$) then .... \"</p>\n\n<p>All that is just fine and the rest is unneeded.</p>\n\n<p>But the rest is a bit of a mess.</p>\n\n<p>\"Suppose now that $k/n$ is a factor of $k$\" Again, this is trivial if $n|k$ and is meaningless if $n \\not \\mid k$.</p>\n\n<p>And we can rule out $n|k$ as that would imply $\\sqrt{12} = m/n = 2k/n$ is an integer. Which</p>\n\n<p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p>\n\n<p>\" Then $(\\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$\"</p>\n\n<p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p>\n\n<p>\"But this means that n divides m, which again is a contradiction.\"</p>\n\n<p>Actually it's not a contradiction if $n = 1$. </p>\n\n<p>But this isn't a contradiction that needed to be reached. 
$k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p>\n" }, { "answer_id": 2115546, "score": 0, "text": "<p>See this proof that \nif $n$ is not a perfect square\nthen $\\sqrt{n}$\nis irrational:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466\">Follow-up Question: Proof of Irrationality of $\\sqrt{3}$</a></p>\n\n<p>The proof starts by saying that\nif $n$ is not a perfect square\nthen there is a $k$ such that\n$k^2 &lt; n &lt; (k+1)^2$.\nThe proof breaks down if\n$k^2 = n$.</p>\n\n<p>Note that\nthis proof does <em>not</em>\nuse divisibility.</p>\n" }, { "answer_id": 2115472, "score": 6, "text": "<p><strong>Proof.</strong> &nbsp;\nAssume $\\sqrt{12} \\in \\mathbb{Q}$ is rational, then it can be written as $\\sqrt{12}=\\cfrac{m}{n}$ with $m,n \\in \\mathbb{Z}$ coprime.</p>\n\n<p>Squaring the equality gives $m^2 = 12 n^2 = 3 \\cdot 4 \\cdot n^2\\,$. Therefore $3 \\mid m^2 = m \\cdot m$ and, since $3$ is a prime, it follows by <a href=\"https://en.wikipedia.org/wiki/Euclid&#39;s_lemma\" rel=\"noreferrer\">Euclid's Lemma</a> that $3 \\mid m\\,$.</p>\n\n<p>Then $m = 3k$ for some $k \\in \\mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \\iff 3 k^2 = 4 n^2\\,$. Therefore $3 \\mid 4 n^2$ and, since $3 \\not \\mid 4$ it follows that $3 \\mid n^2$ then, again by Euclid's Lemma, $3 \\mid n\\,$.</p>\n\n<p>But $3 \\mid m$ and $3 \\mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\\sqrt{12} \\in \\mathbb{Q}$ must be false, therefore $\\sqrt{12}$ is irrational.</p>\n\n<p><br>\n<strong>Critique</strong> of the posted proof.</p>\n\n<blockquote>\n <p>Proof: Suppose there <em>were</em> such a number, $a = \\in \\mathbb{Q}$ s.t. $a^2 = 12$.</p>\n \n <p>This implies that $\\exists$ $m, n \\in \\mathbb{Z}$ s.t. 
$\\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p>\n \n <p>$\\Rightarrow m^2 = 12n^2$. </p>\n</blockquote>\n\n<p>So far so good.</p>\n\n<blockquote>\n <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p>\n</blockquote>\n\n<p>The fact that $2 \\mid m^2 \\implies 2 \\mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p>\n\n<blockquote>\n <p>it can thus be written $2k = m$ for some $k \\in \\mathbb{Z}$.</p>\n \n <p>Thus $m^2 = 12n^2 $</p>\n \n <p>$\\Rightarrow 4k^2 = 12n^2 $</p>\n</blockquote>\n\n<p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\\sqrt{3}$ is irrational.</p>\n\n<blockquote>\n <p>$\\Rightarrow \\frac{k^2}{3} = n^2$</p>\n \n <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p>\n</blockquote>\n\n<p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\\,$, which directly implies that $3 \\mid k^2\\,$.</p>\n\n<blockquote>\n <p>or $\\frac{k}{n}$ has a factor (because $\\frac{k^2}{n^2}= 3$)</p>\n</blockquote>\n\n<p>This makes no sense, and it is in fact not needed to complete the proof.</p>\n\n<blockquote>\n <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p>\n \n <p>$\\Rightarrow 9j^2 = 3n^2 $</p>\n \n <p>$\\Rightarrow n^2 = 3j^2 $</p>\n</blockquote>\n\n<p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \\mid n^2\\,$, and therefore $3 \\mid n$ which contradicts the assumption that $m,n$ are coprime.</p>\n\n<blockquote>\n <p>[ <em>rest of post snipped</em> ]</p>\n</blockquote>\n" }, { "answer_id": 2115610, "score": 2, "text": "<p>\"This implies that ∃ m,n∈Z s.t. m2n2=12. 
Assume without loss of generality that m, n have no factors in common.\"</p>\n\n<p>I, personally, would not argue \"without loss of generality\" . $q \\in \\mathbb Q$ is defined as $q = \\frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p>\n\n<p>\"This implies that $m^2$ is even, and therefore that m is even\"</p>\n\n<p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: \"Therefore $2|m^2$ and, as $2$ is prime, $2|m$\". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p>\n\n<p>\"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\\frac kn$ has a factor \".</p>\n\n<p>As $\\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p>\n\n<p>Also, if $\\frac kn$ is an integer at all, then it is trivial that $\\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. And if $\\frac kn$ is not an integer then the statement $\\frac kn$ is a factor of $k$ is meaningless.</p>\n\n<p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\\sqrt{12} = 2\\sqrt{3}$ is an integer which is easy to verify is not the case).</p>\n\n<p>\"if the former ($3|k$) then .... 
\"</p>\n\n<p>All that is just fine and the rest is unneeded.</p>\n\n<p>But the rest is a bit of a mess.</p>\n\n<p>\"Suppose now that $k/n$ is a factor of $k$\" Again, this is trivial if $n|k$ and is meaningless if $n \\not \\mid k$.</p>\n\n<p>And we can rule out $n|k$ as that would imply $\\sqrt{12} = m/n = 2k/n$ is an integer. Which</p>\n\n<p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p>\n\n<p>\" Then $(\\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$\"</p>\n\n<p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p>\n\n<p>\"But this means that n divides m, which again is a contradiction.\"</p>\n\n<p>Actually it's not a contradiction if $n = 1$. </p>\n\n<p>But this isn't a contradiction that needed to be reached. $k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p>\n" }, { "answer_id": 2115546, "score": 0, "text": "<p>See this proof that \nif $n$ is not a perfect square\nthen $\\sqrt{n}$\nis irrational:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466\">Follow-up Question: Proof of Irrationality of $\\sqrt{3}$</a></p>\n\n<p>The proof starts by saying that\nif $n$ is not a perfect square\nthen there is a $k$ such that\n$k^2 &lt; n &lt; (k+1)^2$.\nThe proof breaks down if\n$k^2 = n$.</p>\n\n<p>Note that\nthis proof does <em>not</em>\nuse divisibility.</p>\n" }, { "answer_id": 2115472, "score": 6, "text": "<p><strong>Proof.</strong> &nbsp;\nAssume $\\sqrt{12} \\in \\mathbb{Q}$ is rational, then it can be written as $\\sqrt{12}=\\cfrac{m}{n}$ with $m,n \\in \\mathbb{Z}$ coprime.</p>\n\n<p>Squaring the equality gives $m^2 = 12 n^2 = 3 \\cdot 4 \\cdot n^2\\,$. 
Therefore $3 \\mid m^2 = m \\cdot m$ and, since $3$ is a prime, it follows by <a href=\"https://en.wikipedia.org/wiki/Euclid&#39;s_lemma\" rel=\"noreferrer\">Euclid's Lemma</a> that $3 \\mid m\\,$.</p>\n\n<p>Then $m = 3k$ for some $k \\in \\mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \\iff 3 k^2 = 4 n^2\\,$. Therefore $3 \\mid 4 n^2$ and, since $3 \\not \\mid 4$ it follows that $3 \\mid n^2$ then, again by Euclid's Lemma, $3 \\mid n\\,$.</p>\n\n<p>But $3 \\mid m$ and $3 \\mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\\sqrt{12} \\in \\mathbb{Q}$ must be false, therefore $\\sqrt{12}$ is irrational.</p>\n\n<p><br>\n<strong>Critique</strong> of the posted proof.</p>\n\n<blockquote>\n <p>Proof: Suppose there <em>were</em> such a number, $a = \\in \\mathbb{Q}$ s.t. $a^2 = 12$.</p>\n \n <p>This implies that $\\exists$ $m, n \\in \\mathbb{Z}$ s.t. $\\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p>\n \n <p>$\\Rightarrow m^2 = 12n^2$. </p>\n</blockquote>\n\n<p>So far so good.</p>\n\n<blockquote>\n <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p>\n</blockquote>\n\n<p>The fact that $2 \\mid m^2 \\implies 2 \\mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p>\n\n<blockquote>\n <p>it can thus be written $2k = m$ for some $k \\in \\mathbb{Z}$.</p>\n \n <p>Thus $m^2 = 12n^2 $</p>\n \n <p>$\\Rightarrow 4k^2 = 12n^2 $</p>\n</blockquote>\n\n<p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\\sqrt{3}$ is irrational.</p>\n\n<blockquote>\n <p>$\\Rightarrow \\frac{k^2}{3} = n^2$</p>\n \n <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p>\n</blockquote>\n\n<p>You should generally avoid fractions where they are not necessary. 
The previous line gave $k^2 = 3 n^2\\,$, which directly implies that $3 \\mid k^2\\,$.</p>\n\n<blockquote>\n <p>or $\\frac{k}{n}$ has a factor (because $\\frac{k^2}{n^2}= 3$)</p>\n</blockquote>\n\n<p>This makes no sense, and it is in fact not needed to complete the proof.</p>\n\n<blockquote>\n <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p>\n \n <p>$\\Rightarrow 9j^2 = 3n^2 $</p>\n \n <p>$\\Rightarrow n^2 = 3j^2 $</p>\n</blockquote>\n\n<p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \\mid n^2\\,$, and therefore $3 \\mid n$ which contradicts the assumption that $m,n$ are coprime.</p>\n\n<blockquote>\n <p>[ <em>rest of post snipped</em> ]</p>\n</blockquote>\n" }, { "answer_id": 2115610, "score": 2, "text": "<p>\"This implies that ∃ m,n∈Z s.t. m2n2=12. Assume without loss of generality that m, n have no factors in common.\"</p>\n\n<p>I, personally, would not argue \"without loss of generality\" . $q \\in \\mathbb Q$ is defined as $q = \\frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p>\n\n<p>\"This implies that $m^2$ is even, and therefore that m is even\"</p>\n\n<p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: \"Therefore $2|m^2$ and, as $2$ is prime, $2|m$\". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p>\n\n<p>\"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\\frac kn$ has a factor \".</p>\n\n<p>As $\\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. 
That anything else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p>\n\n<p>Also, if $\\frac kn$ is an integer at all, then it is trivial that $\\frac kn$ is a factor of $k$ whether or not $k^2/3$ is an integer or not. And if $\\frac kn$ is not an integer then the statement $\\frac kn$ is a factor of $k$ is meaningless.</p>\n\n<p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\\sqrt{12} = 2\\sqrt{3}$ is an integer which is easy to verify is not the case).</p>\n\n<p>\"if the former ($3|k$) then .... \"</p>\n\n<p>All that is just fine and the rest is unneeded.</p>\n\n<p>But the rest is a bit of a mess.</p>\n\n<p>\"Suppose now that $k/n$ is a factor of $k$\" Again, this is trivial if $n|k$ and is meaningless if $n \\not \\mid k$.</p>\n\n<p>And we can rule out $n|k$ as that would imply $\\sqrt{12} = m/n = 2k/n$ is an integer. Which it is not.</p>\n\n<p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p>\n\n<p>\" Then $(\\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$\"</p>\n\n<p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p>\n\n<p>\"But this means that n divides m, which again is a contradiction.\"</p>\n\n<p>Actually it's not a contradiction if $n = 1$. </p>\n\n<p>But this isn't a contradiction that needed to be reached. 
$k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p>\n" }, { "answer_id": 2115546, "score": 0, "text": "<p>See this proof that \nif $n$ is not a perfect square\nthen $\\sqrt{n}$\nis irrational:</p>\n\n<p><a href=\"https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466\">Follow-up Question: Proof of Irrationality of $\\sqrt{3}$</a></p>\n\n<p>The proof starts by saying that\nif $n$ is not a perfect square\nthen there is a $k$ such that\n$k^2 &lt; n &lt; (k+1)^2$.\nThe proof breaks down if\n$k^2 = n$.</p>\n\n<p>Note that\nthis proof does <em>not</em>\nuse divisibility.</p>\n" }
Question: Please critique my proof that $\sqrt{12}$ is irrational Answer 1: <p><strong>Proof.</strong> &nbsp; Assume $\sqrt{12} \in \mathbb{Q}$ is rational, then it can be written as $\sqrt{12}=\cfrac{m}{n}$ with $m,n \in \mathbb{Z}$ coprime.</p> <p>Squaring the equality gives $m^2 = 12 n^2 = 3 \cdot 4 \cdot n^2\,$. Therefore $3 \mid m^2 = m \cdot m$ and, since $3$ is a prime, it follows by <a href="https://en.wikipedia.org/wiki/Euclid&#39;s_lemma" rel="noreferrer">Euclid's Lemma</a> that $3 \mid m\,$.</p> <p>Then $m = 3k$ for some $k \in \mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \iff 3 k^2 = 4 n^2\,$. Therefore $3 \mid 4 n^2$ and, since $3 \not \mid 4$ it follows that $3 \mid n^2$ then, again by Euclid's Lemma, $3 \mid n\,$.</p> <p>But $3 \mid m$ and $3 \mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\sqrt{12} \in \mathbb{Q}$ must be false, therefore $\sqrt{12}$ is irrational.</p> <p><br> <strong>Critique</strong> of the posted proof.</p> <blockquote> <p>Proof: Suppose there <em>were</em> such a number, $a = \in \mathbb{Q}$ s.t. $a^2 = 12$.</p> <p>This implies that $\exists$ $m, n \in \mathbb{Z}$ s.t. $\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p> <p>$\Rightarrow m^2 = 12n^2$. </p> </blockquote> <p>So far so good.</p> <blockquote> <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p> </blockquote> <p>The fact that $2 \mid m^2 \implies 2 \mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p> <blockquote> <p>it can thus be written $2k = m$ for some $k \in \mathbb{Z}$.</p> <p>Thus $m^2 = 12n^2 $</p> <p>$\Rightarrow 4k^2 = 12n^2 $</p> </blockquote> <p>Correct. 
As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\sqrt{3}$ is irrational.</p> <blockquote> <p>$\Rightarrow \frac{k^2}{3} = n^2$</p> <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p> </blockquote> <p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\,$, which directly implies that $3 \mid k^2\,$.</p> <blockquote> <p>or $\frac{k}{n}$ has a factor (because $\frac{k^2}{n^2}= 3$)</p> </blockquote> <p>This makes no sense, and it is in fact not needed to complete the proof.</p> <blockquote> <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p> <p>$\Rightarrow 9j^2 = 3n^2 $</p> <p>$\Rightarrow n^2 = 3j^2 $</p> </blockquote> <p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \mid n^2\,$, and therefore $3 \mid n$ which contradicts the assumption that $m,n$ are coprime.</p> <blockquote> <p>[ <em>rest of post snipped</em> ]</p> </blockquote> Answer 2: <p>"This implies that ∃ m,n∈Z s.t. $\frac{m^2}{n^2}=12$. Assume without loss of generality that m, n have no factors in common."</p> <p>I, personally, would not argue "without loss of generality". $q \in \mathbb Q$ is defined as $q = \frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p> <p>"This implies that $m^2$ is even, and therefore that m is even"</p> <p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: "Therefore $2|m^2$ and, as $2$ is prime, $2|m$". 
This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p> <p>"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\frac kn$ has a factor ".</p> <p>As $\frac {k^2}3$ is an integer, it implies $3|k^2$. Period. That always happens. That anything else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p> <p>Also, if $\frac kn$ is an integer at all, then it is trivial that $\frac kn$ is a factor of $k$ whether or not $k^2/3$ is an integer or not. And if $\frac kn$ is not an integer then the statement $\frac kn$ is a factor of $k$ is meaningless.</p> <p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\sqrt{12} = 2\sqrt{3}$ is an integer which is easy to verify is not the case).</p> <p>"if the former ($3|k$) then .... "</p> <p>All that is just fine and the rest is unneeded.</p> <p>But the rest is a bit of a mess.</p> <p>"Suppose now that $k/n$ is a factor of $k$" Again, this is trivial if $n|k$ and is meaningless if $n \not \mid k$.</p> <p>And we can rule out $n|k$ as that would imply $\sqrt{12} = m/n = 2k/n$ is an integer. Which it is not.</p> <p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p> <p>" Then $(\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$"</p> <p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p> <p>"But this means that n divides m, which again is a contradiction."</p> <p>Actually it's not a contradiction if $n = 1$. </p> <p>But this isn't a contradiction that needed to be reached. 
$k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p> Answer 3: <p>See this proof that if $n$ is not a perfect square then $\sqrt{n}$ is irrational:</p> <p><a href="https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466">Follow-up Question: Proof of Irrationality of $\sqrt{3}$</a></p> <p>The proof starts by saying that if $n$ is not a perfect square then there is a $k$ such that $k^2 &lt; n &lt; (k+1)^2$. The proof breaks down if $k^2 = n$.</p> <p>Note that this proof does <em>not</em> use divisibility.</p> Answer 4: <p><strong>Proof.</strong> &nbsp; Assume $\sqrt{12} \in \mathbb{Q}$ is rational, then it can be written as $\sqrt{12}=\cfrac{m}{n}$ with $m,n \in \mathbb{Z}$ coprime.</p> <p>Squaring the equality gives $m^2 = 12 n^2 = 3 \cdot 4 \cdot n^2\,$. Therefore $3 \mid m^2 = m \cdot m$ and, since $3$ is a prime, it follows by <a href="https://en.wikipedia.org/wiki/Euclid&#39;s_lemma" rel="noreferrer">Euclid's Lemma</a> that $3 \mid m\,$.</p> <p>Then $m = 3k$ for some $k \in \mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \iff 3 k^2 = 4 n^2\,$. Therefore $3 \mid 4 n^2$ and, since $3 \not \mid 4$ it follows that $3 \mid n^2$ then, again by Euclid's Lemma, $3 \mid n\,$.</p> <p>But $3 \mid m$ and $3 \mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\sqrt{12} \in \mathbb{Q}$ must be false, therefore $\sqrt{12}$ is irrational.</p> <p><br> <strong>Critique</strong> of the posted proof.</p> <blockquote> <p>Proof: Suppose there <em>were</em> such a number, $a = \in \mathbb{Q}$ s.t. $a^2 = 12$.</p> <p>This implies that $\exists$ $m, n \in \mathbb{Z}$ s.t. $\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p> <p>$\Rightarrow m^2 = 12n^2$. 
</p> </blockquote> <p>So far so good.</p> <blockquote> <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p> </blockquote> <p>The fact that $2 \mid m^2 \implies 2 \mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p> <blockquote> <p>it can thus be written $2k = m$ for some $k \in \mathbb{Z}$.</p> <p>Thus $m^2 = 12n^2 $</p> <p>$\Rightarrow 4k^2 = 12n^2 $</p> </blockquote> <p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\sqrt{3}$ is irrational.</p> <blockquote> <p>$\Rightarrow \frac{k^2}{3} = n^2$</p> <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p> </blockquote> <p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\,$, which directly implies that $3 \mid k^2\,$.</p> <blockquote> <p>or $\frac{k}{n}$ has a factor (because $\frac{k^2}{n^2}= 3$)</p> </blockquote> <p>This makes no sense, and it is in fact not needed to complete the proof.</p> <blockquote> <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p> <p>$\Rightarrow 9j^2 = 3n^2 $</p> <p>$\Rightarrow n^2 = 3j^2 $</p> </blockquote> <p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \mid n^2\,$, and therefore $3 \mid n$ which contradicts the assumption that $m,n$ are coprime.</p> <blockquote> <p>[ <em>rest of post snipped</em> ]</p> </blockquote> Answer 5: <p>"This implies that ∃ m,n∈Z s.t. m2n2=12. Assume without loss of generality that m, n have no factors in common."</p> <p>I, personally, would not argue "without loss of generality" . $q \in \mathbb Q$ is defined as $q = \frac mn$ for some relatively prime integers. 
So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p> <p>"This implies that $m^2$ is even, and therefore that m is even"</p> <p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: "Therefore $2|m^2$ and, as $2$ is prime, $2|m$". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p> <p>"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\frac kn$ has a factor ".</p> <p>As $\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p> <p>Also, if $\frac kn$ is an integer at all, then it is trivial that $\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. And if $\frac kn$ is not an integer then the statement $\frac kn$ is a factor of $k$ is meaningless.</p> <p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\sqrt{12} = 2\sqrt{3}$ is an integer which is easy to verify is not the case).</p> <p>"if the former ($3|k$) then .... "</p> <p>All that is just fine and the rest is unneeded.</p> <p>But the rest is a bit of a mess.</p> <p>"Suppose now that $k/n$ is a factor of $k$" Again, this is trivial if $n|k$ and is meaningless if $n \not \mid k$.</p> <p>And we can rule out $n|k$ as that would imply $\sqrt{12} = m/n = 2k/n$ is an integer. Which</p> <p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. 
And it doesn't:</p> <p>" Then $(\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$"</p> <p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p> <p>"But this means that n divides m, which again is a contradiction."</p> <p>Actually it's not a contradiction if $n = 1$. </p> <p>But this isn't a contradiction that needed to be reached. $k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p> Answer 6: <p>See this proof that if $n$ is not a perfect square then $\sqrt{n}$ is irrational:</p> <p><a href="https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466">Follow-up Question: Proof of Irrationality of $\sqrt{3}$</a></p> <p>The proof starts by saying that if $n$ is not a perfect square then there is a $k$ such that $k^2 &lt; n &lt; (k+1)^2$. The proof breaks down if $k^2 = n$.</p> <p>Note that this proof does <em>not</em> use divisibility.</p> Answer 7: <p><strong>Proof.</strong> &nbsp; Assume $\sqrt{12} \in \mathbb{Q}$ is rational, then it can be written as $\sqrt{12}=\cfrac{m}{n}$ with $m,n \in \mathbb{Z}$ coprime.</p> <p>Squaring the equality gives $m^2 = 12 n^2 = 3 \cdot 4 \cdot n^2\,$. Therefore $3 \mid m^2 = m \cdot m$ and, since $3$ is a prime, it follows by <a href="https://en.wikipedia.org/wiki/Euclid&#39;s_lemma" rel="noreferrer">Euclid's Lemma</a> that $3 \mid m\,$.</p> <p>Then $m = 3k$ for some $k \in \mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \iff 3 k^2 = 4 n^2\,$. 
Therefore $3 \mid 4 n^2$ and, since $3 \not \mid 4$ it follows that $3 \mid n^2$ then, again by Euclid's Lemma, $3 \mid n\,$.</p> <p>But $3 \mid m$ and $3 \mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\sqrt{12} \in \mathbb{Q}$ must be false, therefore $\sqrt{12}$ is irrational.</p> <p><br> <strong>Critique</strong> of the posted proof.</p> <blockquote> <p>Proof: Suppose there <em>were</em> such a number, $a = \in \mathbb{Q}$ s.t. $a^2 = 12$.</p> <p>This implies that $\exists$ $m, n \in \mathbb{Z}$ s.t. $\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p> <p>$\Rightarrow m^2 = 12n^2$. </p> </blockquote> <p>So far so good.</p> <blockquote> <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p> </blockquote> <p>The fact that $2 \mid m^2 \implies 2 \mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p> <blockquote> <p>it can thus be written $2k = m$ for some $k \in \mathbb{Z}$.</p> <p>Thus $m^2 = 12n^2 $</p> <p>$\Rightarrow 4k^2 = 12n^2 $</p> </blockquote> <p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\sqrt{3}$ is irrational.</p> <blockquote> <p>$\Rightarrow \frac{k^2}{3} = n^2$</p> <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p> </blockquote> <p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\,$, which directly implies that $3 \mid k^2\,$.</p> <blockquote> <p>or $\frac{k}{n}$ has a factor (because $\frac{k^2}{n^2}= 3$)</p> </blockquote> <p>This makes no sense, and it is in fact not needed to complete the proof.</p> <blockquote> <p>Suppose that the former is true, and $3$ is a factor of $k$. 
Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p> <p>$\Rightarrow 9j^2 = 3n^2 $</p> <p>$\Rightarrow n^2 = 3j^2 $</p> </blockquote> <p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \mid n^2\,$, and therefore $3 \mid n$ which contradicts the assumption that $m,n$ are coprime.</p> <blockquote> <p>[ <em>rest of post snipped</em> ]</p> </blockquote> Answer 8: <p>"This implies that ∃ m,n∈Z s.t. m2n2=12. Assume without loss of generality that m, n have no factors in common."</p> <p>I, personally, would not argue "without loss of generality" . $q \in \mathbb Q$ is defined as $q = \frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p> <p>"This implies that $m^2$ is even, and therefore that m is even"</p> <p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: "Therefore $2|m^2$ and, as $2$ is prime, $2|m$". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p> <p>"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\frac kn$ has a factor ".</p> <p>As $\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p> <p>Also, if $\frac kn$ is an integer at all, then it is trivial that $\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. 
And if $\frac kn$ is not an integer then the statement $\frac kn$ is a factor of $k$ is meaningless.</p> <p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\sqrt{12} = 2\sqrt{3}$ is an integer which is easy to verify is not the case).</p> <p>"if the former ($3|k$) then .... "</p> <p>All that is just fine and the rest is unneeded.</p> <p>But the rest is a bit of a mess.</p> <p>"Suppose now that $k/n$ is a factor of $k$" Again, this is trivial if $n|k$ and is meaningless if $n \not \mid k$.</p> <p>And we can rule out $n|k$ as that would imply $\sqrt{12} = m/n = 2k/n$ is an integer. Which</p> <p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p> <p>" Then $(\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$"</p> <p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p> <p>"But this means that n divides m, which again is a contradiction."</p> <p>Actually it's not a contradiction if $n = 1$. </p> <p>But this isn't a contradiction that needed to be reached. $k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p> Answer 9: <p>See this proof that if $n$ is not a perfect square then $\sqrt{n}$ is irrational:</p> <p><a href="https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466">Follow-up Question: Proof of Irrationality of $\sqrt{3}$</a></p> <p>The proof starts by saying that if $n$ is not a perfect square then there is a $k$ such that $k^2 &lt; n &lt; (k+1)^2$. 
The proof breaks down if $k^2 = n$.</p> <p>Note that this proof does <em>not</em> use divisibility.</p> Answer 10: <p><strong>Proof.</strong> &nbsp; Assume $\sqrt{12} \in \mathbb{Q}$ is rational, then it can be written as $\sqrt{12}=\cfrac{m}{n}$ with $m,n \in \mathbb{Z}$ coprime.</p> <p>Squaring the equality gives $m^2 = 12 n^2 = 3 \cdot 4 \cdot n^2\,$. Therefore $3 \mid m^2 = m \cdot m$ and, since $3$ is a prime, it follows by <a href="https://en.wikipedia.org/wiki/Euclid&#39;s_lemma" rel="noreferrer">Euclid's Lemma</a> that $3 \mid m\,$.</p> <p>Then $m = 3k$ for some $k \in \mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \iff 3 k^2 = 4 n^2\,$. Therefore $3 \mid 4 n^2$ and, since $3 \not \mid 4$ it follows that $3 \mid n^2$ then, again by Euclid's Lemma, $3 \mid n\,$.</p> <p>But $3 \mid m$ and $3 \mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\sqrt{12} \in \mathbb{Q}$ must be false, therefore $\sqrt{12}$ is irrational.</p> <p><br> <strong>Critique</strong> of the posted proof.</p> <blockquote> <p>Proof: Suppose there <em>were</em> such a number, $a = \in \mathbb{Q}$ s.t. $a^2 = 12$.</p> <p>This implies that $\exists$ $m, n \in \mathbb{Z}$ s.t. $\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p> <p>$\Rightarrow m^2 = 12n^2$. </p> </blockquote> <p>So far so good.</p> <blockquote> <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p> </blockquote> <p>The fact that $2 \mid m^2 \implies 2 \mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p> <blockquote> <p>it can thus be written $2k = m$ for some $k \in \mathbb{Z}$.</p> <p>Thus $m^2 = 12n^2 $</p> <p>$\Rightarrow 4k^2 = 12n^2 $</p> </blockquote> <p>Correct. 
As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\sqrt{3}$ is irrational.</p> <blockquote> <p>$\Rightarrow \frac{k^2}{3} = n^2$</p> <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p> </blockquote> <p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\,$, which directly implies that $3 \mid k^2\,$.</p> <blockquote> <p>or $\frac{k}{n}$ has a factor (because $\frac{k^2}{n^2}= 3$)</p> </blockquote> <p>This makes no sense, and it is in fact not needed to complete the proof.</p> <blockquote> <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p> <p>$\Rightarrow 9j^2 = 3n^2 $</p> <p>$\Rightarrow n^2 = 3j^2 $</p> </blockquote> <p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \mid n^2\,$, and therefore $3 \mid n$ which contradicts the assumption that $m,n$ are coprime.</p> <blockquote> <p>[ <em>rest of post snipped</em> ]</p> </blockquote> Answer 11: <p>"This implies that ∃ m,n∈Z s.t. m2n2=12. Assume without loss of generality that m, n have no factors in common."</p> <p>I, personally, would not argue "without loss of generality" . $q \in \mathbb Q$ is defined as $q = \frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p> <p>"This implies that $m^2$ is even, and therefore that m is even"</p> <p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: "Therefore $2|m^2$ and, as $2$ is prime, $2|m$". 
This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p> <p>"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\frac kn$ has a factor ".</p> <p>As $\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p> <p>Also, if $\frac kn$ is an integer at all, then it is trivial that $\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. And if $\frac kn$ is not an integer then the statement $\frac kn$ is a factor of $k$ is meaningless.</p> <p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\sqrt{12} = 2\sqrt{3}$ is an integer which is easy to verify is not the case).</p> <p>"if the former ($3|k$) then .... "</p> <p>All that is just fine and the rest is unneeded.</p> <p>But the rest is a bit of a mess.</p> <p>"Suppose now that $k/n$ is a factor of $k$" Again, this is trivial if $n|k$ and is meaningless if $n \not \mid k$.</p> <p>And we can rule out $n|k$ as that would imply $\sqrt{12} = m/n = 2k/n$ is an integer. Which</p> <p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p> <p>" Then $(\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$"</p> <p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p> <p>"But this means that n divides m, which again is a contradiction."</p> <p>Actually it's not a contradiction if $n = 1$. </p> <p>But this isn't a contradiction that needed to be reached. 
$k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p> Answer 12: <p>See this proof that if $n$ is not a perfect square then $\sqrt{n}$ is irrational:</p> <p><a href="https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466">Follow-up Question: Proof of Irrationality of $\sqrt{3}$</a></p> <p>The proof starts by saying that if $n$ is not a perfect square then there is a $k$ such that $k^2 &lt; n &lt; (k+1)^2$. The proof breaks down if $k^2 = n$.</p> <p>Note that this proof does <em>not</em> use divisibility.</p> Answer 13: <p><strong>Proof.</strong> &nbsp; Assume $\sqrt{12} \in \mathbb{Q}$ is rational, then it can be written as $\sqrt{12}=\cfrac{m}{n}$ with $m,n \in \mathbb{Z}$ coprime.</p> <p>Squaring the equality gives $m^2 = 12 n^2 = 3 \cdot 4 \cdot n^2\,$. Therefore $3 \mid m^2 = m \cdot m$ and, since $3$ is a prime, it follows by <a href="https://en.wikipedia.org/wiki/Euclid&#39;s_lemma" rel="noreferrer">Euclid's Lemma</a> that $3 \mid m\,$.</p> <p>Then $m = 3k$ for some $k \in \mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \iff 3 k^2 = 4 n^2\,$. Therefore $3 \mid 4 n^2$ and, since $3 \not \mid 4$ it follows that $3 \mid n^2$; then, again by Euclid's Lemma, $3 \mid n\,$.</p> <p>But $3 \mid m$ and $3 \mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\sqrt{12} \in \mathbb{Q}$ must be false, therefore $\sqrt{12}$ is irrational.</p> <p><br> <strong>Critique</strong> of the posted proof.</p> <blockquote> <p>Proof: Suppose there <em>were</em> such a number, $a \in \mathbb{Q}$ s.t. $a^2 = 12$.</p> <p>This implies that $\exists$ $m, n \in \mathbb{Z}$ s.t. $\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p> <p>$\Rightarrow m^2 = 12n^2$.
</p> </blockquote> <p>So far so good.</p> <blockquote> <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p> </blockquote> <p>The fact that $2 \mid m^2 \implies 2 \mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p> <blockquote> <p>it can thus be written $2k = m$ for some $k \in \mathbb{Z}$.</p> <p>Thus $m^2 = 12n^2 $</p> <p>$\Rightarrow 4k^2 = 12n^2 $</p> </blockquote> <p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\sqrt{3}$ is irrational.</p> <blockquote> <p>$\Rightarrow \frac{k^2}{3} = n^2$</p> <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p> </blockquote> <p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\,$, which directly implies that $3 \mid k^2\,$.</p> <blockquote> <p>or $\frac{k}{n}$ has a factor (because $\frac{k^2}{n^2}= 3$)</p> </blockquote> <p>This makes no sense, and it is in fact not needed to complete the proof.</p> <blockquote> <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer $j$, which implies that $(3j)^2 = 3n^2$</p> <p>$\Rightarrow 9j^2 = 3n^2 $</p> <p>$\Rightarrow n^2 = 3j^2 $</p> </blockquote> <p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \mid n^2\,$, and therefore $3 \mid n$ which contradicts the assumption that $m,n$ are coprime.</p> <blockquote> <p>[ <em>rest of post snipped</em> ]</p> </blockquote>
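As an aside, the perfect-square criterion from the linked answer ($k^2 &lt; n &lt; (k+1)^2$ when $n$ is not a perfect square) is easy to check numerically. This is only a finite sanity check, not a substitute for the proofs above; a minimal Python sketch using the standard-library <code>math.isqrt</code>:

```python
from math import isqrt

# 12 is not a perfect square: it lies strictly between 3^2 and 4^2,
# which is exactly the k^2 < n < (k+1)^2 criterion with k = 3.
k = isqrt(12)
assert k * k < 12 < (k + 1) * (k + 1)

# Spot-check small denominators: m^2 = 12 n^2 has no integer
# solution m for any n >= 1 in this range.
for n in range(1, 1000):
    m = isqrt(12 * n * n)  # the only integer that could possibly work
    assert m * m != 12 * n * n
```

Of course this only rules out finitely many denominators; it is the Euclid's-Lemma descent above that rules out all of them.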
So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p> <p>"This implies that $m^2$ is even, and therefore that m is even"</p> <p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: "Therefore $2|m^2$ and, as $2$ is prime, $2|m$". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p> <p>"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\frac kn$ has a factor ".</p> <p>As $\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p> <p>Also, if $\frac kn$ is an integer at all, then it is trivial that $\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. And if $\frac kn$ is not an integer then the statement $\frac kn$ is a factor of $k$ is meaningless.</p> <p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\sqrt{12} = 2\sqrt{3}$ is an integer which is easy to verify is not the case).</p> <p>"if the former ($3|k$) then .... "</p> <p>All that is just fine and the rest is unneeded.</p> <p>But the rest is a bit of a mess.</p> <p>"Suppose now that $k/n$ is a factor of $k$" Again, this is trivial if $n|k$ and is meaningless if $n \not \mid k$.</p> <p>And we can rule out $n|k$ as that would imply $\sqrt{12} = m/n = 2k/n$ is an integer. Which</p> <p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. 
And it doesn't:</p> <p>" Then $(\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$"</p> <p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p> <p>"But this means that n divides m, which again is a contradiction."</p> <p>Actually it's not a contradiction if $n = 1$. </p> <p>But this isn't a contradiction that needed to be reached. $k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p> Answer 15: <p>See this proof that if $n$ is not a perfect square then $\sqrt{n}$ is irrational:</p> <p><a href="https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466">Follow-up Question: Proof of Irrationality of $\sqrt{3}$</a></p> <p>The proof starts by saying that if $n$ is not a perfect square then there is a $k$ such that $k^2 &lt; n &lt; (k+1)^2$. The proof breaks down if $k^2 = n$.</p> <p>Note that this proof does <em>not</em> use divisibility.</p> Answer 16: <p><strong>Proof.</strong> &nbsp; Assume $\sqrt{12} \in \mathbb{Q}$ is rational, then it can be written as $\sqrt{12}=\cfrac{m}{n}$ with $m,n \in \mathbb{Z}$ coprime.</p> <p>Squaring the equality gives $m^2 = 12 n^2 = 3 \cdot 4 \cdot n^2\,$. Therefore $3 \mid m^2 = m \cdot m$ and, since $3$ is a prime, it follows by <a href="https://en.wikipedia.org/wiki/Euclid&#39;s_lemma" rel="noreferrer">Euclid's Lemma</a> that $3 \mid m\,$.</p> <p>Then $m = 3k$ for some $k \in \mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \iff 3 k^2 = 4 n^2\,$. 
Therefore $3 \mid 4 n^2$ and, since $3 \not \mid 4$ it follows that $3 \mid n^2$ then, again by Euclid's Lemma, $3 \mid n\,$.</p> <p>But $3 \mid m$ and $3 \mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\sqrt{12} \in \mathbb{Q}$ must be false, therefore $\sqrt{12}$ is irrational.</p> <p><br> <strong>Critique</strong> of the posted proof.</p> <blockquote> <p>Proof: Suppose there <em>were</em> such a number, $a = \in \mathbb{Q}$ s.t. $a^2 = 12$.</p> <p>This implies that $\exists$ $m, n \in \mathbb{Z}$ s.t. $\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p> <p>$\Rightarrow m^2 = 12n^2$. </p> </blockquote> <p>So far so good.</p> <blockquote> <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p> </blockquote> <p>The fact that $2 \mid m^2 \implies 2 \mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p> <blockquote> <p>it can thus be written $2k = m$ for some $k \in \mathbb{Z}$.</p> <p>Thus $m^2 = 12n^2 $</p> <p>$\Rightarrow 4k^2 = 12n^2 $</p> </blockquote> <p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\sqrt{3}$ is irrational.</p> <blockquote> <p>$\Rightarrow \frac{k^2}{3} = n^2$</p> <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p> </blockquote> <p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\,$, which directly implies that $3 \mid k^2\,$.</p> <blockquote> <p>or $\frac{k}{n}$ has a factor (because $\frac{k^2}{n^2}= 3$)</p> </blockquote> <p>This makes no sense, and it is in fact not needed to complete the proof.</p> <blockquote> <p>Suppose that the former is true, and $3$ is a factor of $k$. 
Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p> <p>$\Rightarrow 9j^2 = 3n^2 $</p> <p>$\Rightarrow n^2 = 3j^2 $</p> </blockquote> <p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \mid n^2\,$, and therefore $3 \mid n$ which contradicts the assumption that $m,n$ are coprime.</p> <blockquote> <p>[ <em>rest of post snipped</em> ]</p> </blockquote> Answer 17: <p>"This implies that ∃ m,n∈Z s.t. m2n2=12. Assume without loss of generality that m, n have no factors in common."</p> <p>I, personally, would not argue "without loss of generality" . $q \in \mathbb Q$ is defined as $q = \frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p> <p>"This implies that $m^2$ is even, and therefore that m is even"</p> <p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: "Therefore $2|m^2$ and, as $2$ is prime, $2|m$". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p> <p>"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\frac kn$ has a factor ".</p> <p>As $\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p> <p>Also, if $\frac kn$ is an integer at all, then it is trivial that $\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. 
And if $\frac kn$ is not an integer then the statement $\frac kn$ is a factor of $k$ is meaningless.</p> <p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\sqrt{12} = 2\sqrt{3}$ is an integer which is easy to verify is not the case).</p> <p>"if the former ($3|k$) then .... "</p> <p>All that is just fine and the rest is unneeded.</p> <p>But the rest is a bit of a mess.</p> <p>"Suppose now that $k/n$ is a factor of $k$" Again, this is trivial if $n|k$ and is meaningless if $n \not \mid k$.</p> <p>And we can rule out $n|k$ as that would imply $\sqrt{12} = m/n = 2k/n$ is an integer. Which</p> <p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p> <p>" Then $(\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$"</p> <p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p> <p>"But this means that n divides m, which again is a contradiction."</p> <p>Actually it's not a contradiction if $n = 1$. </p> <p>But this isn't a contradiction that needed to be reached. $k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p> Answer 18: <p>See this proof that if $n$ is not a perfect square then $\sqrt{n}$ is irrational:</p> <p><a href="https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466">Follow-up Question: Proof of Irrationality of $\sqrt{3}$</a></p> <p>The proof starts by saying that if $n$ is not a perfect square then there is a $k$ such that $k^2 &lt; n &lt; (k+1)^2$. 
The proof breaks down if $k^2 = n$.</p> <p>Note that this proof does <em>not</em> use divisibility.</p> Answer 19: <p><strong>Proof.</strong> &nbsp; Assume $\sqrt{12} \in \mathbb{Q}$ is rational, then it can be written as $\sqrt{12}=\cfrac{m}{n}$ with $m,n \in \mathbb{Z}$ coprime.</p> <p>Squaring the equality gives $m^2 = 12 n^2 = 3 \cdot 4 \cdot n^2\,$. Therefore $3 \mid m^2 = m \cdot m$ and, since $3$ is a prime, it follows by <a href="https://en.wikipedia.org/wiki/Euclid&#39;s_lemma" rel="noreferrer">Euclid's Lemma</a> that $3 \mid m\,$.</p> <p>Then $m = 3k$ for some $k \in \mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \iff 3 k^2 = 4 n^2\,$. Therefore $3 \mid 4 n^2$ and, since $3 \not \mid 4$ it follows that $3 \mid n^2$ then, again by Euclid's Lemma, $3 \mid n\,$.</p> <p>But $3 \mid m$ and $3 \mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\sqrt{12} \in \mathbb{Q}$ must be false, therefore $\sqrt{12}$ is irrational.</p> <p><br> <strong>Critique</strong> of the posted proof.</p> <blockquote> <p>Proof: Suppose there <em>were</em> such a number, $a = \in \mathbb{Q}$ s.t. $a^2 = 12$.</p> <p>This implies that $\exists$ $m, n \in \mathbb{Z}$ s.t. $\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p> <p>$\Rightarrow m^2 = 12n^2$. </p> </blockquote> <p>So far so good.</p> <blockquote> <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p> </blockquote> <p>The fact that $2 \mid m^2 \implies 2 \mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p> <blockquote> <p>it can thus be written $2k = m$ for some $k \in \mathbb{Z}$.</p> <p>Thus $m^2 = 12n^2 $</p> <p>$\Rightarrow 4k^2 = 12n^2 $</p> </blockquote> <p>Correct. 
As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\sqrt{3}$ is irrational.</p> <blockquote> <p>$\Rightarrow \frac{k^2}{3} = n^2$</p> <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p> </blockquote> <p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\,$, which directly implies that $3 \mid k^2\,$.</p> <blockquote> <p>or $\frac{k}{n}$ has a factor (because $\frac{k^2}{n^2}= 3$)</p> </blockquote> <p>This makes no sense, and it is in fact not needed to complete the proof.</p> <blockquote> <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p> <p>$\Rightarrow 9j^2 = 3n^2 $</p> <p>$\Rightarrow n^2 = 3j^2 $</p> </blockquote> <p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \mid n^2\,$, and therefore $3 \mid n$ which contradicts the assumption that $m,n$ are coprime.</p> <blockquote> <p>[ <em>rest of post snipped</em> ]</p> </blockquote> Answer 20: <p>"This implies that ∃ m,n∈Z s.t. m2n2=12. Assume without loss of generality that m, n have no factors in common."</p> <p>I, personally, would not argue "without loss of generality" . $q \in \mathbb Q$ is defined as $q = \frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p> <p>"This implies that $m^2$ is even, and therefore that m is even"</p> <p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: "Therefore $2|m^2$ and, as $2$ is prime, $2|m$". 
This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p> <p>"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\frac kn$ has a factor ".</p> <p>As $\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p> <p>Also, if $\frac kn$ is an integer at all, then it is trivial that $\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. And if $\frac kn$ is not an integer then the statement $\frac kn$ is a factor of $k$ is meaningless.</p> <p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\sqrt{12} = 2\sqrt{3}$ is an integer which is easy to verify is not the case).</p> <p>"if the former ($3|k$) then .... "</p> <p>All that is just fine and the rest is unneeded.</p> <p>But the rest is a bit of a mess.</p> <p>"Suppose now that $k/n$ is a factor of $k$" Again, this is trivial if $n|k$ and is meaningless if $n \not \mid k$.</p> <p>And we can rule out $n|k$ as that would imply $\sqrt{12} = m/n = 2k/n$ is an integer. Which</p> <p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p> <p>" Then $(\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$"</p> <p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p> <p>"But this means that n divides m, which again is a contradiction."</p> <p>Actually it's not a contradiction if $n = 1$. </p> <p>But this isn't a contradiction that needed to be reached. 
$k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p> Answer 21: <p>See this proof that if $n$ is not a perfect square then $\sqrt{n}$ is irrational:</p> <p><a href="https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466">Follow-up Question: Proof of Irrationality of $\sqrt{3}$</a></p> <p>The proof starts by saying that if $n$ is not a perfect square then there is a $k$ such that $k^2 &lt; n &lt; (k+1)^2$. The proof breaks down if $k^2 = n$.</p> <p>Note that this proof does <em>not</em> use divisibility.</p> Answer 22: <p><strong>Proof.</strong> &nbsp; Assume $\sqrt{12} \in \mathbb{Q}$ is rational, then it can be written as $\sqrt{12}=\cfrac{m}{n}$ with $m,n \in \mathbb{Z}$ coprime.</p> <p>Squaring the equality gives $m^2 = 12 n^2 = 3 \cdot 4 \cdot n^2\,$. Therefore $3 \mid m^2 = m \cdot m$ and, since $3$ is a prime, it follows by <a href="https://en.wikipedia.org/wiki/Euclid&#39;s_lemma" rel="noreferrer">Euclid's Lemma</a> that $3 \mid m\,$.</p> <p>Then $m = 3k$ for some $k \in \mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \iff 3 k^2 = 4 n^2\,$. Therefore $3 \mid 4 n^2$ and, since $3 \not \mid 4$ it follows that $3 \mid n^2$ then, again by Euclid's Lemma, $3 \mid n\,$.</p> <p>But $3 \mid m$ and $3 \mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\sqrt{12} \in \mathbb{Q}$ must be false, therefore $\sqrt{12}$ is irrational.</p> <p><br> <strong>Critique</strong> of the posted proof.</p> <blockquote> <p>Proof: Suppose there <em>were</em> such a number, $a = \in \mathbb{Q}$ s.t. $a^2 = 12$.</p> <p>This implies that $\exists$ $m, n \in \mathbb{Z}$ s.t. $\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p> <p>$\Rightarrow m^2 = 12n^2$. 
</p> </blockquote> <p>So far so good.</p> <blockquote> <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p> </blockquote> <p>The fact that $2 \mid m^2 \implies 2 \mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p> <blockquote> <p>it can thus be written $2k = m$ for some $k \in \mathbb{Z}$.</p> <p>Thus $m^2 = 12n^2 $</p> <p>$\Rightarrow 4k^2 = 12n^2 $</p> </blockquote> <p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\sqrt{3}$ is irrational.</p> <blockquote> <p>$\Rightarrow \frac{k^2}{3} = n^2$</p> <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p> </blockquote> <p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\,$, which directly implies that $3 \mid k^2\,$.</p> <blockquote> <p>or $\frac{k}{n}$ has a factor (because $\frac{k^2}{n^2}= 3$)</p> </blockquote> <p>This makes no sense, and it is in fact not needed to complete the proof.</p> <blockquote> <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p> <p>$\Rightarrow 9j^2 = 3n^2 $</p> <p>$\Rightarrow n^2 = 3j^2 $</p> </blockquote> <p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \mid n^2\,$, and therefore $3 \mid n$ which contradicts the assumption that $m,n$ are coprime.</p> <blockquote> <p>[ <em>rest of post snipped</em> ]</p> </blockquote> Answer 23: <p>"This implies that ∃ m,n∈Z s.t. m2n2=12. Assume without loss of generality that m, n have no factors in common."</p> <p>I, personally, would not argue "without loss of generality" . $q \in \mathbb Q$ is defined as $q = \frac mn$ for some relatively prime integers. 
So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p> <p>"This implies that $m^2$ is even, and therefore that m is even"</p> <p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: "Therefore $2|m^2$ and, as $2$ is prime, $2|m$". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p> <p>"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\frac kn$ has a factor ".</p> <p>As $\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p> <p>Also, if $\frac kn$ is an integer at all, then it is trivial that $\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. And if $\frac kn$ is not an integer then the statement $\frac kn$ is a factor of $k$ is meaningless.</p> <p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\sqrt{12} = 2\sqrt{3}$ is an integer which is easy to verify is not the case).</p> <p>"if the former ($3|k$) then .... "</p> <p>All that is just fine and the rest is unneeded.</p> <p>But the rest is a bit of a mess.</p> <p>"Suppose now that $k/n$ is a factor of $k$" Again, this is trivial if $n|k$ and is meaningless if $n \not \mid k$.</p> <p>And we can rule out $n|k$ as that would imply $\sqrt{12} = m/n = 2k/n$ is an integer. Which</p> <p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. 
And it doesn't:</p> <p>" Then $(\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$"</p> <p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p> <p>"But this means that n divides m, which again is a contradiction."</p> <p>Actually it's not a contradiction if $n = 1$. </p> <p>But this isn't a contradiction that needed to be reached. $k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p> Answer 24: <p>See this proof that if $n$ is not a perfect square then $\sqrt{n}$ is irrational:</p> <p><a href="https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466">Follow-up Question: Proof of Irrationality of $\sqrt{3}$</a></p> <p>The proof starts by saying that if $n$ is not a perfect square then there is a $k$ such that $k^2 &lt; n &lt; (k+1)^2$. The proof breaks down if $k^2 = n$.</p> <p>Note that this proof does <em>not</em> use divisibility.</p> Answer 25: <p><strong>Proof.</strong> &nbsp; Assume $\sqrt{12} \in \mathbb{Q}$ is rational, then it can be written as $\sqrt{12}=\cfrac{m}{n}$ with $m,n \in \mathbb{Z}$ coprime.</p> <p>Squaring the equality gives $m^2 = 12 n^2 = 3 \cdot 4 \cdot n^2\,$. Therefore $3 \mid m^2 = m \cdot m$ and, since $3$ is a prime, it follows by <a href="https://en.wikipedia.org/wiki/Euclid&#39;s_lemma" rel="noreferrer">Euclid's Lemma</a> that $3 \mid m\,$.</p> <p>Then $m = 3k$ for some $k \in \mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \iff 3 k^2 = 4 n^2\,$. 
Therefore $3 \mid 4 n^2$ and, since $3 \not \mid 4$ it follows that $3 \mid n^2$ then, again by Euclid's Lemma, $3 \mid n\,$.</p> <p>But $3 \mid m$ and $3 \mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\sqrt{12} \in \mathbb{Q}$ must be false, therefore $\sqrt{12}$ is irrational.</p> <p><br> <strong>Critique</strong> of the posted proof.</p> <blockquote> <p>Proof: Suppose there <em>were</em> such a number, $a = \in \mathbb{Q}$ s.t. $a^2 = 12$.</p> <p>This implies that $\exists$ $m, n \in \mathbb{Z}$ s.t. $\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p> <p>$\Rightarrow m^2 = 12n^2$. </p> </blockquote> <p>So far so good.</p> <blockquote> <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p> </blockquote> <p>The fact that $2 \mid m^2 \implies 2 \mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p> <blockquote> <p>it can thus be written $2k = m$ for some $k \in \mathbb{Z}$.</p> <p>Thus $m^2 = 12n^2 $</p> <p>$\Rightarrow 4k^2 = 12n^2 $</p> </blockquote> <p>Correct. As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\sqrt{3}$ is irrational.</p> <blockquote> <p>$\Rightarrow \frac{k^2}{3} = n^2$</p> <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p> </blockquote> <p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\,$, which directly implies that $3 \mid k^2\,$.</p> <blockquote> <p>or $\frac{k}{n}$ has a factor (because $\frac{k^2}{n^2}= 3$)</p> </blockquote> <p>This makes no sense, and it is in fact not needed to complete the proof.</p> <blockquote> <p>Suppose that the former is true, and $3$ is a factor of $k$. 
Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p> <p>$\Rightarrow 9j^2 = 3n^2 $</p> <p>$\Rightarrow n^2 = 3j^2 $</p> </blockquote> <p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \mid n^2\,$, and therefore $3 \mid n$ which contradicts the assumption that $m,n$ are coprime.</p> <blockquote> <p>[ <em>rest of post snipped</em> ]</p> </blockquote> Answer 26: <p>"This implies that ∃ m,n∈Z s.t. m2n2=12. Assume without loss of generality that m, n have no factors in common."</p> <p>I, personally, would not argue "without loss of generality" . $q \in \mathbb Q$ is defined as $q = \frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat-- not merely by lack of loss of generality. (It's not that big of an issue.)</p> <p>"This implies that $m^2$ is even, and therefore that m is even"</p> <p>I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: "Therefore $2|m^2$ and, as $2$ is prime, $2|m$". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. </p> <p>"Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\frac kn$ has a factor ".</p> <p>As $\frac {k^2}3$ is a integer, it implies $3|k^2$. Period. That always happens. That any thing else <em>may</em> happen doesn't matter. It <em>may</em> have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter.</p> <p>Also, if $\frac kn$ is an integer at all, then it is trivial that $\frac kn$ is a factor $k$ whether or not $k^2/3$ is an integer or not. 
And if $\frac kn$ is not an integer then the statement $\frac kn$ is a factor of $k$ is meaningless.</p> <p>And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\sqrt{12} = 2\sqrt{3}$ is an integer which is easy to verify is not the case).</p> <p>"if the former ($3|k$) then .... "</p> <p>All that is just fine and the rest is unneeded.</p> <p>But the rest is a bit of a mess.</p> <p>"Suppose now that $k/n$ is a factor of $k$" Again, this is trivial if $n|k$ and is meaningless if $n \not \mid k$.</p> <p>And we can rule out $n|k$ as that would imply $\sqrt{12} = m/n = 2k/n$ is an integer. Which</p> <p>But beware. This is true of <em>all</em> numbers and nothing relevant is likely to arise. And it doesn't:</p> <p>" Then $(\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$"</p> <p>Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive).</p> <p>"But this means that n divides m, which again is a contradiction."</p> <p>Actually it's not a contradiction if $n = 1$. </p> <p>But this isn't a contradiction that needed to be reached. $k/n$ is a factor of $k$ only makes sense if $n|k$ which would imply $n|m = 2k$.</p> Answer 27: <p>See this proof that if $n$ is not a perfect square then $\sqrt{n}$ is irrational:</p> <p><a href="https://math.stackexchange.com/questions/1391411/follow-up-question-proof-of-irrationality-of-sqrt3/1391466#1391466">Follow-up Question: Proof of Irrationality of $\sqrt{3}$</a></p> <p>The proof starts by saying that if $n$ is not a perfect square then there is a $k$ such that $k^2 &lt; n &lt; (k+1)^2$. 
The proof breaks down if $k^2 = n$.</p> <p>Note that this proof does <em>not</em> use divisibility.</p> Answer 28: <p><strong>Proof.</strong> &nbsp; Assume $\sqrt{12} \in \mathbb{Q}$ is rational, then it can be written as $\sqrt{12}=\cfrac{m}{n}$ with $m,n \in \mathbb{Z}$ coprime.</p> <p>Squaring the equality gives $m^2 = 12 n^2 = 3 \cdot 4 \cdot n^2\,$. Therefore $3 \mid m^2 = m \cdot m$ and, since $3$ is a prime, it follows by <a href="https://en.wikipedia.org/wiki/Euclid&#39;s_lemma" rel="noreferrer">Euclid's Lemma</a> that $3 \mid m\,$.</p> <p>Then $m = 3k$ for some $k \in \mathbb{Z}$ and substituting back gives $9 k^2 = 12 n^2 \iff 3 k^2 = 4 n^2\,$. Therefore $3 \mid 4 n^2$ and, since $3 \not \mid 4$ it follows that $3 \mid n^2$ then, again by Euclid's Lemma, $3 \mid n\,$.</p> <p>But $3 \mid m$ and $3 \mid n$ contradicts the assumption that $m,n$ are coprime, so the premise that $\sqrt{12} \in \mathbb{Q}$ must be false, therefore $\sqrt{12}$ is irrational.</p> <p><br> <strong>Critique</strong> of the posted proof.</p> <blockquote> <p>Proof: Suppose there <em>were</em> such a number, $a = \in \mathbb{Q}$ s.t. $a^2 = 12$.</p> <p>This implies that $\exists$ $m, n \in \mathbb{Z}$ s.t. $\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common.</p> <p>$\Rightarrow m^2 = 12n^2$. </p> </blockquote> <p>So far so good.</p> <blockquote> <p>This implies that $m^2$ is even, and therefore that $m$ is even;</p> </blockquote> <p>The fact that $2 \mid m^2 \implies 2 \mid m$ may sound obvious, but still needs some justification. You could argue by contradiction, or use Euclid's Lemma.</p> <blockquote> <p>it can thus be written $2k = m$ for some $k \in \mathbb{Z}$.</p> <p>Thus $m^2 = 12n^2 $</p> <p>$\Rightarrow 4k^2 = 12n^2 $</p> </blockquote> <p>Correct. 
As an observation, $k^2 = 3 n^2$ just eliminated the perfect square factor of $4$ and reduced the problem to proving that $\sqrt{3}$ is irrational.</p> <blockquote> <p>$\Rightarrow \frac{k^2}{3} = n^2$</p> <p>Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which imples that $k$ has $3$</p> </blockquote> <p>You should generally avoid fractions where they are not necessary. The previous line gave $k^2 = 3 n^2\,$, which directly implies that $3 \mid k^2\,$.</p> <blockquote> <p>or $\frac{k}{n}$ has a factor (because $\frac{k^2}{n^2}= 3$)</p> </blockquote> <p>This makes no sense, and it is in fact not needed to complete the proof.</p> <blockquote> <p>Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$</p> <p>$\Rightarrow 9j^2 = 3n^2 $</p> <p>$\Rightarrow n^2 = 3j^2 $</p> </blockquote> <p>The proof is complete right here at this point, if you just note that the last equality implies that $3 \mid n^2\,$, and therefore $3 \mid n$ which contradicts the assumption that $m,n$ are coprime.</p> <blockquote> <p>[ <em>rest of post snipped</em> ]</p> </blockquote>
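The two ingredients the answers above lean on — the Euclid's-lemma step ($3 \mid m^2 \Rightarrow 3 \mid m$) and the fact that $12$ is not a perfect square — can be sanity-checked numerically. A small sketch of my own (not taken from the answers):

```python
# Instance of Euclid's lemma for the prime 3: if 3 divides m^2, it divides m.
for m in range(1, 10_000):
    if (m * m) % 3 == 0:
        assert m % 3 == 0

# 12 is not a perfect square: it lies strictly between 3^2 and 4^2,
# which is exactly the gap the linked perfect-square proof exploits.
k = int(12 ** 0.5)
assert k * k < 12 < (k + 1) * (k + 1)
print(k)  # 3
```

This checks only finitely many cases, of course; the answers give the actual proofs.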
Where is the mathematical logic with this exponent behaviour?
2,116,219
6
https://math.stackexchange.com/questions/2116219/where-is-the-mathematical-logic-with-this-exponent-behaviour
<p>I'm learning how the Diffie-Hellman key exchange algorithm works, and I came across a mathematical behaviour whose logic I can't understand.</p> <p>This is the story: (you can bypass this, it's just a brief summary)</p> <pre><code>Alice Eve Bob secret 15 secret 13 Alice says to Bob ( let's choose the formula : `3^e % 17`) Eve knows now 3,17 3^secret % 17 = 6 3^15 % 17 = 6 ----&gt; 6 Eve knows now 3,17,6 3^secret % 17 = 12 3^13 % 17 = 12 12 &lt;---- Eve knows now 3,17,6,12 (Bob's 12)^secret %17 12^15 %17 = 10 (Alice's 6)^secret %17 6^13 %17 = 10 </code></pre> <p>They got the same number 10, and it's hard for Eve to figure out 10 with very large numbers</p> <pre><code>And it's because : (3^13 % 17) ^ 15 %17 is equal to ((3^15 % 17)^13 %17) </code></pre> <p>But I was wondering about that: </p> <pre><code>(3^13 % 17) ^ 15 %17 </code></pre> <p>It appears that <code>(3^13 % 17) ^ 15 %17</code> is also equal to <code>(3^13 ) ^ 15 %17</code></p> <p>My question is: what is the logic behind it? I'm not asking for an accurate mathematical proof, I just want to understand the simple human logic behind this.</p>
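The concrete numbers in the exchange above, and the equality the question asks about, are easy to verify. A quick sketch (mine, not part of the original question) using Python's built-in three-argument `pow`, which computes modular exponentiation directly:

```python
# Toy Diffie-Hellman exchange with base 3 and modulus 17.
# pow(a, e, m) computes (a ** e) % m efficiently.

A = pow(3, 15, 17)   # Alice sends 3^15 % 17
B = pow(3, 13, 17)   # Bob sends 3^13 % 17
print(A, B)          # 6 12

# Both sides derive the same shared secret:
shared_alice = pow(B, 15, 17)    # (3^13 % 17)^15 % 17
shared_bob = pow(A, 13, 17)      # (3^15 % 17)^13 % 17
print(shared_alice, shared_bob)  # 10 10

# The behaviour the question asks about: reducing mod 17 *before*
# exponentiating does not change the final residue.
assert pow(B, 15, 17) == pow(3 ** 13, 15, 17) == (3 ** 13) ** 15 % 17
```

The answers below explain why the early reduction is harmless.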
[ { "answer_id": 2120674, "score": 2, "text": "<p>Assuming the number and base are positive, all the modulo operator <code>n % 17</code> does is to give you the last \"digit\" of $n$ when written in base 17.</p>\n\n<p>Now when you want to calculate the last digit (base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\\times b$ only depends on the last digits of $a$ and $b$.</p>\n" }, { "answer_id": 2116236, "score": 0, "text": "<p>My view is that I don't see modulo as an operation the way computers do. To me, it's a <em>relation</em>, and its symbol looks like ${\\cdot}\\equiv {\\cdot}\\pmod{17}$. Two numbers can be checked against one another (we write that by filling in the two numbers where the dots are) and either they fulfill that property, or they don't. This property works well with addition, subtraction, multiplication and exponentiation in the sense that if $$\na\\equiv b\\pmod{17}\n$$\nis true, then\n$$\na+n \\equiv b+n\\pmod{17}\\\\\na-n \\equiv b-n\\pmod{17}\\\\\na\\cdot n \\equiv b\\cdot n\\pmod{17}\\\\\na^n \\equiv b^n\\pmod{17}\n$$\nare also all true for any integer $n$.</p>\n\n<p>What you do is first saying that $3^{13}\\equiv 12\\pmod{17}$, in other words, $a = 3^{13}$ and $b = 12$. Then you raise each side of that to the $15$th power, i.e. set $n = 15$ and use the last of the four above properties. 
The relation therefore still holds.</p>\n\n<p>What <code>a % k</code> does in a computer program is simply to find the smallest natural number $b$ such that $a \\equiv b\\pmod{k}$. As long as your operations are addition, subtraction, multiplication and exponentiation, it doesn't matter whether you do this reducing step during intermediate calculations or not, precisely by the four properties above.</p>\n" }, { "answer_id": 2120057, "score": 0, "text": "<p>Denoting the <a href=\"https://en.wikipedia.org/wiki/Modulo_operation\" rel=\"nofollow noreferrer\">Modulo operator %</a> by $\\left(\\,\\text{mod}\\,\\right)$, we have for all $\\,a,b,c,d\\in{\\mathbb N}^{+}\\,$:\n$$ \n\\left(\\,a^{b\\,c}\\,\\right)\\color{blue}{\\text{mod}}\\,d =\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d =\\left(\\,a^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d =\\left(\\,a\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{b\\,c}\\,\\,\\color{blue}{\\text{mod}}\\,d \\\\[2mm] \n\\small \\left(3^{\\small13\\times15}\\right)\\text{mod}\\,17 =\\left(3^{\\small13}\\,\\text{mod}\\,17\\right)^{\\small15}\\,\\text{mod}\\,17 =\\left(3^{\\small15}\\,\\text{mod}\\,17\\right)^{\\small13}\\,\\text{mod}\\,17 =\\left(3\\,\\text{mod}\\,17\\right)^{\\small13\\times15}\\,\\text{mod}\\,17 \n$$\nWhere above identity results from the multiplication distributive property:\n$$ \n\\left[ a_{\\small1}\\,.\\,a_{\\small2}\\,.\\,a_{\\small3}\\,.\\,\\cdots\\, \\right]\\,\\color{blue}{\\text{mod}}\\,d \\,=\\, \\left[ \\left(a_{\\small1}\\,\\color{blue}{\\text{mod}}\\,d\\right)\\left(a_{\\small2}\\,\\color{blue}{\\text{mod}}\\,d\\right)\\left(a_{\\small3}\\,\\color{blue}{\\text{mod}}\\,d\\right) \\cdots\\, \\right]\\,\\color{blue}{\\text{mod}}\\,d \\,\\,\\Rightarrow \\\\[6mm] \n\\begin{align} \n\\left(\\,a^{b\\,c}\\,\\right)\\color{blue}{\\text{mod}}\\,d &amp;= \\left(\\,a^{b}\\,\\right)^{c}\\,\\color{blue}{\\text{mod}}\\,d = 
\\underbrace{\\left[a^b.a^b.\\,\\cdots\\,.a^b\\right]}_{c\\,\\text{ times}}\\,\\,\\color{blue}{\\text{mod}}\\,d \\\\[2mm] \n&amp;= \\underbrace{\\left[\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\,\\cdots\\,\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\right]}_{c\\,\\text{ times}}\\,\\,\\color{blue}{\\text{mod}}\\,d = \\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d \n\\end{align} \n$$\n <br/> \n$$ \n\\begin{align} \n{\\small\\text{Let:}}\\,a_{\\small1}=q_{\\small1}.d+r_{\\small1}\\,, &amp;\\quad a_{\\small2}=q_{\\small2}.d+r_{\\small2}\\,, \\quad a_{\\small3}=q_{\\small3}.d+r_{\\small3}\\,, \\,\\,\\cdots \\\\[2mm] \n\\left[ a_{\\small1}\\,a_{\\small2}\\,a_{\\small3}\\,\\cdots\\, \\right]\\,\\text{mod}\\,d &amp;=\\left[ \\left(q_{\\small1}.d+r_{\\small1}\\right)\\,\\left(q_{\\small2}.d+r_{\\small2}\\right)\\,\\left(q_{\\small3}.d+r_{\\small3}\\right)\\,\\cdots\\, \\right]\\,\\text{mod}\\,d \\\\[2mm] \n&amp;\\quad \\,\\,\\{{\\small\\text{ all terms will have a power of }}\\,\\,d\\,\\, {\\small\\text{ except the last term }}\\} \\\\[2mm] \n&amp;= \\left[ {\\large\\alpha}\\left(q_{\\small1},r_{\\small1},q_{\\small2},r_{\\small2},q_{\\small3},r_{\\small3},\\cdots\\right).d+\\left(r_{\\small1}\\,r_{\\small2}\\,r_{\\small3}\\,\\cdots\\,\\right) \\right]\\,\\text{mod}\\,d \\\\[2mm] \n&amp;= \\left[\\,r_{\\small1}\\,r_{\\small2}\\,r_{\\small3}\\,\\cdots\\, \\right]\\,\\text{mod}\\,d \\quad\\qquad\\{{\\small\\left(\\alpha.d\\right)\\,\\text{mod}\\,d=0}\\} \\\\[2mm] \n&amp;= \\left[ \\left(a_{\\small1}\\,\\text{mod}\\,d\\right)\\left(a_{\\small2}\\,\\text{mod}\\,d\\right)\\left(a_{\\small3}\\,\\text{mod}\\,d\\right) \\cdots\\, \\right]\\,\\text{mod}\\,d \\\\[2mm] \n\\end{align} \n$$</p>\n" }, { "answer_id": 2120674, "score": 2, "text": "<p>Assuming the number and base are positive, all the modulo operator <code>n % 17</code> 
does is to give you the last \"digit\" of $n$ when written in base 17.</p>\n\n<p>Now when you want to calculate the last digit (base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\\times b$ only depends on the last digits of $a$ and $b$.</p>\n" }, { "answer_id": 2116236, "score": 0, "text": "<p>My view is that I don't see modulo as an operation the way computers do. To me, it's a <em>relation</em>, and its symbol looks like ${\\cdot}\\equiv {\\cdot}\\pmod{17}$. Two numbers can be checked against one another (we write that by filling in the two numbers where the dots are) and either they fulfill that property, or they don't. This property works well with addition, subtraction, multiplication and exponentiation in the sense that if $$\na\\equiv b\\pmod{17}\n$$\nis true, then\n$$\na+n \\equiv b+n\\pmod{17}\\\\\na-n \\equiv b-n\\pmod{17}\\\\\na\\cdot n \\equiv b\\cdot n\\pmod{17}\\\\\na^n \\equiv b^n\\pmod{17}\n$$\nare also all true for any integer $n$.</p>\n\n<p>What you do is first saying that $3^{13}\\equiv 12\\pmod{17}$, in other words, $a = 3^{13}$ and $b = 12$. Then you raise each side of that to the $15$th power, i.e. set $n = 15$ and use the last of the four above properties. The relation therefore still holds.</p>\n\n<p>What <code>a % k</code> does in a computer program is simply to find the smallest natural number $b$ such that $a \\equiv b\\pmod{k}$. 
As long as your operations are addition, subtraction, multiplication and exponentiation, it doesn't matter whether you do this reducing step during intermediate calculations or not, precisely by the four properties above.</p>\n" }, { "answer_id": 2120057, "score": 0, "text": "<p>Denoting the <a href=\"https://en.wikipedia.org/wiki/Modulo_operation\" rel=\"nofollow noreferrer\">Modulo operator %</a> by $\\left(\\,\\text{mod}\\,\\right)$, we have for all $\\,a,b,c,d\\in{\\mathbb N}^{+}\\,$:\n$$ \n\\left(\\,a^{b\\,c}\\,\\right)\\color{blue}{\\text{mod}}\\,d =\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d =\\left(\\,a^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d =\\left(\\,a\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{b\\,c}\\,\\,\\color{blue}{\\text{mod}}\\,d \\\\[2mm] \n\\small \\left(3^{\\small13\\times15}\\right)\\text{mod}\\,17 =\\left(3^{\\small13}\\,\\text{mod}\\,17\\right)^{\\small15}\\,\\text{mod}\\,17 =\\left(3^{\\small15}\\,\\text{mod}\\,17\\right)^{\\small13}\\,\\text{mod}\\,17 =\\left(3\\,\\text{mod}\\,17\\right)^{\\small13\\times15}\\,\\text{mod}\\,17 \n$$\nWhere above identity results from the multiplication distributive property:\n$$ \n\\left[ a_{\\small1}\\,.\\,a_{\\small2}\\,.\\,a_{\\small3}\\,.\\,\\cdots\\, \\right]\\,\\color{blue}{\\text{mod}}\\,d \\,=\\, \\left[ \\left(a_{\\small1}\\,\\color{blue}{\\text{mod}}\\,d\\right)\\left(a_{\\small2}\\,\\color{blue}{\\text{mod}}\\,d\\right)\\left(a_{\\small3}\\,\\color{blue}{\\text{mod}}\\,d\\right) \\cdots\\, \\right]\\,\\color{blue}{\\text{mod}}\\,d \\,\\,\\Rightarrow \\\\[6mm] \n\\begin{align} \n\\left(\\,a^{b\\,c}\\,\\right)\\color{blue}{\\text{mod}}\\,d &amp;= \\left(\\,a^{b}\\,\\right)^{c}\\,\\color{blue}{\\text{mod}}\\,d = \\underbrace{\\left[a^b.a^b.\\,\\cdots\\,.a^b\\right]}_{c\\,\\text{ times}}\\,\\,\\color{blue}{\\text{mod}}\\,d \\\\[2mm] \n&amp;= 
\\underbrace{\\left[\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\,\\cdots\\,\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\right]}_{c\\,\\text{ times}}\\,\\,\\color{blue}{\\text{mod}}\\,d = \\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d \n\\end{align} \n$$\n <br/> \n$$ \n\\begin{align} \n{\\small\\text{Let:}}\\,a_{\\small1}=q_{\\small1}.d+r_{\\small1}\\,, &amp;\\quad a_{\\small2}=q_{\\small2}.d+r_{\\small2}\\,, \\quad a_{\\small3}=q_{\\small3}.d+r_{\\small3}\\,, \\,\\,\\cdots \\\\[2mm] \n\\left[ a_{\\small1}\\,a_{\\small2}\\,a_{\\small3}\\,\\cdots\\, \\right]\\,\\text{mod}\\,d &amp;=\\left[ \\left(q_{\\small1}.d+r_{\\small1}\\right)\\,\\left(q_{\\small2}.d+r_{\\small2}\\right)\\,\\left(q_{\\small3}.d+r_{\\small3}\\right)\\,\\cdots\\, \\right]\\,\\text{mod}\\,d \\\\[2mm] \n&amp;\\quad \\,\\,\\{{\\small\\text{ all terms will have a power of }}\\,\\,d\\,\\, {\\small\\text{ except the last term }}\\} \\\\[2mm] \n&amp;= \\left[ {\\large\\alpha}\\left(q_{\\small1},r_{\\small1},q_{\\small2},r_{\\small2},q_{\\small3},r_{\\small3},\\cdots\\right).d+\\left(r_{\\small1}\\,r_{\\small2}\\,r_{\\small3}\\,\\cdots\\,\\right) \\right]\\,\\text{mod}\\,d \\\\[2mm] \n&amp;= \\left[\\,r_{\\small1}\\,r_{\\small2}\\,r_{\\small3}\\,\\cdots\\, \\right]\\,\\text{mod}\\,d \\quad\\qquad\\{{\\small\\left(\\alpha.d\\right)\\,\\text{mod}\\,d=0}\\} \\\\[2mm] \n&amp;= \\left[ \\left(a_{\\small1}\\,\\text{mod}\\,d\\right)\\left(a_{\\small2}\\,\\text{mod}\\,d\\right)\\left(a_{\\small3}\\,\\text{mod}\\,d\\right) \\cdots\\, \\right]\\,\\text{mod}\\,d \\\\[2mm] \n\\end{align} \n$$</p>\n" }, { "answer_id": 2120674, "score": 2, "text": "<p>Assuming the number and base are positive, all the modulo operator <code>n % 17</code> does is to give you the last \"digit\" of $n$ when written in base 17.</p>\n\n<p>Now when you want to calculate the last digit 
(base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\\times b$ only depends on the last digits of $a$ and $b$.</p>\n" }, { "answer_id": 2116236, "score": 0, "text": "<p>My view is that I don't see modulo as an operation the way computers do. To me, it's a <em>relation</em>, and its symbol looks like ${\\cdot}\\equiv {\\cdot}\\pmod{17}$. Two numbers can be checked against one another (we write that by filling in the two numbers where the dots are) and either they fulfill that property, or they don't. This property works well with addition, subtraction, multiplication and exponentiation in the sense that if $$\na\\equiv b\\pmod{17}\n$$\nis true, then\n$$\na+n \\equiv b+n\\pmod{17}\\\\\na-n \\equiv b-n\\pmod{17}\\\\\na\\cdot n \\equiv b\\cdot n\\pmod{17}\\\\\na^n \\equiv b^n\\pmod{17}\n$$\nare also all true for any integer $n$.</p>\n\n<p>What you do is first saying that $3^{13}\\equiv 12\\pmod{17}$, in other words, $a = 3^{13}$ and $b = 12$. Then you raise each side of that to the $15$th power, i.e. set $n = 15$ and use the last of the four above properties. The relation therefore still holds.</p>\n\n<p>What <code>a % k</code> does in a computer program is simply to find the smallest natural number $b$ such that $a \\equiv b\\pmod{k}$. 
As long as your operations are addition, subtraction, multiplication and exponentiation, it doesn't matter whether you do this reducing step during intermediate calculations or not, precisely by the four properties above.</p>\n" }, { "answer_id": 2120057, "score": 0, "text": "<p>Denoting the <a href=\"https://en.wikipedia.org/wiki/Modulo_operation\" rel=\"nofollow noreferrer\">Modulo operator %</a> by $\\left(\\,\\text{mod}\\,\\right)$, we have for all $\\,a,b,c,d\\in{\\mathbb N}^{+}\\,$:\n$$ \n\\left(\\,a^{b\\,c}\\,\\right)\\color{blue}{\\text{mod}}\\,d =\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d =\\left(\\,a^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d =\\left(\\,a\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{b\\,c}\\,\\,\\color{blue}{\\text{mod}}\\,d \\\\[2mm] \n\\small \\left(3^{\\small13\\times15}\\right)\\text{mod}\\,17 =\\left(3^{\\small13}\\,\\text{mod}\\,17\\right)^{\\small15}\\,\\text{mod}\\,17 =\\left(3^{\\small15}\\,\\text{mod}\\,17\\right)^{\\small13}\\,\\text{mod}\\,17 =\\left(3\\,\\text{mod}\\,17\\right)^{\\small13\\times15}\\,\\text{mod}\\,17 \n$$\nWhere above identity results from the multiplication distributive property:\n$$ \n\\left[ a_{\\small1}\\,.\\,a_{\\small2}\\,.\\,a_{\\small3}\\,.\\,\\cdots\\, \\right]\\,\\color{blue}{\\text{mod}}\\,d \\,=\\, \\left[ \\left(a_{\\small1}\\,\\color{blue}{\\text{mod}}\\,d\\right)\\left(a_{\\small2}\\,\\color{blue}{\\text{mod}}\\,d\\right)\\left(a_{\\small3}\\,\\color{blue}{\\text{mod}}\\,d\\right) \\cdots\\, \\right]\\,\\color{blue}{\\text{mod}}\\,d \\,\\,\\Rightarrow \\\\[6mm] \n\\begin{align} \n\\left(\\,a^{b\\,c}\\,\\right)\\color{blue}{\\text{mod}}\\,d &amp;= \\left(\\,a^{b}\\,\\right)^{c}\\,\\color{blue}{\\text{mod}}\\,d = \\underbrace{\\left[a^b.a^b.\\,\\cdots\\,.a^b\\right]}_{c\\,\\text{ times}}\\,\\,\\color{blue}{\\text{mod}}\\,d \\\\[2mm] \n&amp;= 
\\underbrace{\\left[\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\,\\cdots\\,\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\right]}_{c\\,\\text{ times}}\\,\\,\\color{blue}{\\text{mod}}\\,d = \\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d \n\\end{align} \n$$\n <br/> \n$$ \n\\begin{align} \n{\\small\\text{Let:}}\\,a_{\\small1}=q_{\\small1}.d+r_{\\small1}\\,, &amp;\\quad a_{\\small2}=q_{\\small2}.d+r_{\\small2}\\,, \\quad a_{\\small3}=q_{\\small3}.d+r_{\\small3}\\,, \\,\\,\\cdots \\\\[2mm] \n\\left[ a_{\\small1}\\,a_{\\small2}\\,a_{\\small3}\\,\\cdots\\, \\right]\\,\\text{mod}\\,d &amp;=\\left[ \\left(q_{\\small1}.d+r_{\\small1}\\right)\\,\\left(q_{\\small2}.d+r_{\\small2}\\right)\\,\\left(q_{\\small3}.d+r_{\\small3}\\right)\\,\\cdots\\, \\right]\\,\\text{mod}\\,d \\\\[2mm] \n&amp;\\quad \\,\\,\\{{\\small\\text{ all terms will have a power of }}\\,\\,d\\,\\, {\\small\\text{ except the last term }}\\} \\\\[2mm] \n&amp;= \\left[ {\\large\\alpha}\\left(q_{\\small1},r_{\\small1},q_{\\small2},r_{\\small2},q_{\\small3},r_{\\small3},\\cdots\\right).d+\\left(r_{\\small1}\\,r_{\\small2}\\,r_{\\small3}\\,\\cdots\\,\\right) \\right]\\,\\text{mod}\\,d \\\\[2mm] \n&amp;= \\left[\\,r_{\\small1}\\,r_{\\small2}\\,r_{\\small3}\\,\\cdots\\, \\right]\\,\\text{mod}\\,d \\quad\\qquad\\{{\\small\\left(\\alpha.d\\right)\\,\\text{mod}\\,d=0}\\} \\\\[2mm] \n&amp;= \\left[ \\left(a_{\\small1}\\,\\text{mod}\\,d\\right)\\left(a_{\\small2}\\,\\text{mod}\\,d\\right)\\left(a_{\\small3}\\,\\text{mod}\\,d\\right) \\cdots\\, \\right]\\,\\text{mod}\\,d \\\\[2mm] \n\\end{align} \n$$</p>\n" }, { "answer_id": 2120674, "score": 2, "text": "<p>Assuming the number and base are positive, all the modulo operator <code>n % 17</code> does is to give you the last \"digit\" of $n$ when written in base 17.</p>\n\n<p>Now when you want to calculate the last digit 
(base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\\times b$ only depends on the last digits of $a$ and $b$.</p>\n" }, { "answer_id": 2116236, "score": 0, "text": "<p>My view is that I don't see modulo as an operation the way computers do. To me, it's a <em>relation</em>, and its symbol looks like ${\\cdot}\\equiv {\\cdot}\\pmod{17}$. Two numbers can be checked against one another (we write that by filling in the two numbers where the dots are) and either they fulfill that property, or they don't. This property works well with addition, subtraction, multiplication and exponentiation in the sense that if $$\na\\equiv b\\pmod{17}\n$$\nis true, then\n$$\na+n \\equiv b+n\\pmod{17}\\\\\na-n \\equiv b-n\\pmod{17}\\\\\na\\cdot n \\equiv b\\cdot n\\pmod{17}\\\\\na^n \\equiv b^n\\pmod{17}\n$$\nare also all true for any integer $n$.</p>\n\n<p>What you do is first saying that $3^{13}\\equiv 12\\pmod{17}$, in other words, $a = 3^{13}$ and $b = 12$. Then you raise each side of that to the $15$th power, i.e. set $n = 15$ and use the last of the four above properties. The relation therefore still holds.</p>\n\n<p>What <code>a % k</code> does in a computer program is simply to find the smallest natural number $b$ such that $a \\equiv b\\pmod{k}$. 
As long as your operations are addition, subtraction, multiplication and exponentiation, it doesn't matter whether you do this reducing step during intermediate calculations or not, precisely by the four properties above.</p>\n" }, { "answer_id": 2120057, "score": 0, "text": "<p>Denoting the <a href=\"https://en.wikipedia.org/wiki/Modulo_operation\" rel=\"nofollow noreferrer\">Modulo operator %</a> by $\\left(\\,\\text{mod}\\,\\right)$, we have for all $\\,a,b,c,d\\in{\\mathbb N}^{+}\\,$:\n$$ \n\\left(\\,a^{b\\,c}\\,\\right)\\color{blue}{\\text{mod}}\\,d =\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d =\\left(\\,a^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d =\\left(\\,a\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{b\\,c}\\,\\,\\color{blue}{\\text{mod}}\\,d \\\\[2mm] \n\\small \\left(3^{\\small13\\times15}\\right)\\text{mod}\\,17 =\\left(3^{\\small13}\\,\\text{mod}\\,17\\right)^{\\small15}\\,\\text{mod}\\,17 =\\left(3^{\\small15}\\,\\text{mod}\\,17\\right)^{\\small13}\\,\\text{mod}\\,17 =\\left(3\\,\\text{mod}\\,17\\right)^{\\small13\\times15}\\,\\text{mod}\\,17 \n$$\nWhere above identity results from the multiplication distributive property:\n$$ \n\\left[ a_{\\small1}\\,.\\,a_{\\small2}\\,.\\,a_{\\small3}\\,.\\,\\cdots\\, \\right]\\,\\color{blue}{\\text{mod}}\\,d \\,=\\, \\left[ \\left(a_{\\small1}\\,\\color{blue}{\\text{mod}}\\,d\\right)\\left(a_{\\small2}\\,\\color{blue}{\\text{mod}}\\,d\\right)\\left(a_{\\small3}\\,\\color{blue}{\\text{mod}}\\,d\\right) \\cdots\\, \\right]\\,\\color{blue}{\\text{mod}}\\,d \\,\\,\\Rightarrow \\\\[6mm] \n\\begin{align} \n\\left(\\,a^{b\\,c}\\,\\right)\\color{blue}{\\text{mod}}\\,d &amp;= \\left(\\,a^{b}\\,\\right)^{c}\\,\\color{blue}{\\text{mod}}\\,d = \\underbrace{\\left[a^b.a^b.\\,\\cdots\\,.a^b\\right]}_{c\\,\\text{ times}}\\,\\,\\color{blue}{\\text{mod}}\\,d \\\\[2mm] \n&amp;= 
\\underbrace{\\left[\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\,\\cdots\\,\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\right]}_{c\\,\\text{ times}}\\,\\,\\color{blue}{\\text{mod}}\\,d = \\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d \n\\end{align} \n$$\n <br/> \n$$ \n\\begin{align} \n{\\small\\text{Let:}}\\,a_{\\small1}=q_{\\small1}.d+r_{\\small1}\\,, &amp;\\quad a_{\\small2}=q_{\\small2}.d+r_{\\small2}\\,, \\quad a_{\\small3}=q_{\\small3}.d+r_{\\small3}\\,, \\,\\,\\cdots \\\\[2mm] \n\\left[ a_{\\small1}\\,a_{\\small2}\\,a_{\\small3}\\,\\cdots\\, \\right]\\,\\text{mod}\\,d &amp;=\\left[ \\left(q_{\\small1}.d+r_{\\small1}\\right)\\,\\left(q_{\\small2}.d+r_{\\small2}\\right)\\,\\left(q_{\\small3}.d+r_{\\small3}\\right)\\,\\cdots\\, \\right]\\,\\text{mod}\\,d \\\\[2mm] \n&amp;\\quad \\,\\,\\{{\\small\\text{ all terms will have a power of }}\\,\\,d\\,\\, {\\small\\text{ except the last term }}\\} \\\\[2mm] \n&amp;= \\left[ {\\large\\alpha}\\left(q_{\\small1},r_{\\small1},q_{\\small2},r_{\\small2},q_{\\small3},r_{\\small3},\\cdots\\right).d+\\left(r_{\\small1}\\,r_{\\small2}\\,r_{\\small3}\\,\\cdots\\,\\right) \\right]\\,\\text{mod}\\,d \\\\[2mm] \n&amp;= \\left[\\,r_{\\small1}\\,r_{\\small2}\\,r_{\\small3}\\,\\cdots\\, \\right]\\,\\text{mod}\\,d \\quad\\qquad\\{{\\small\\left(\\alpha.d\\right)\\,\\text{mod}\\,d=0}\\} \\\\[2mm] \n&amp;= \\left[ \\left(a_{\\small1}\\,\\text{mod}\\,d\\right)\\left(a_{\\small2}\\,\\text{mod}\\,d\\right)\\left(a_{\\small3}\\,\\text{mod}\\,d\\right) \\cdots\\, \\right]\\,\\text{mod}\\,d \\\\[2mm] \n\\end{align} \n$$</p>\n" }, { "answer_id": 2120674, "score": 2, "text": "<p>Assuming the number and base are positive, all the modulo operator <code>n % 17</code> does is to give you the last \"digit\" of $n$ when written in base 17.</p>\n\n<p>Now when you want to calculate the last digit 
(base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\\times b$ only depends on the last digits of $a$ and $b$.</p>\n" }, { "answer_id": 2116236, "score": 0, "text": "<p>My view is that I don't see modulo as an operation the way computers do. To me, it's a <em>relation</em>, and its symbol looks like ${\\cdot}\\equiv {\\cdot}\\pmod{17}$. Two numbers can be checked against one another (we write that by filling in the two numbers where the dots are) and either they fulfill that property, or they don't. This property works well with addition, subtraction, multiplication and exponentiation in the sense that if $$\na\\equiv b\\pmod{17}\n$$\nis true, then\n$$\na+n \\equiv b+n\\pmod{17}\\\\\na-n \\equiv b-n\\pmod{17}\\\\\na\\cdot n \\equiv b\\cdot n\\pmod{17}\\\\\na^n \\equiv b^n\\pmod{17}\n$$\nare also all true for any integer $n$.</p>\n\n<p>What you do is first saying that $3^{13}\\equiv 12\\pmod{17}$, in other words, $a = 3^{13}$ and $b = 12$. Then you raise each side of that to the $15$th power, i.e. set $n = 15$ and use the last of the four above properties. The relation therefore still holds.</p>\n\n<p>What <code>a % k</code> does in a computer program is simply to find the smallest natural number $b$ such that $a \\equiv b\\pmod{k}$. 
As long as your operations are addition, subtraction, multiplication and exponentiation, it doesn't matter whether you do this reducing step during intermediate calculations or not, precisely by the four properties above.</p>\n" }, { "answer_id": 2120057, "score": 0, "text": "<p>Denoting the <a href=\"https://en.wikipedia.org/wiki/Modulo_operation\" rel=\"nofollow noreferrer\">Modulo operator %</a> by $\\left(\\,\\text{mod}\\,\\right)$, we have for all $\\,a,b,c,d\\in{\\mathbb N}^{+}\\,$:\n$$ \n\\left(\\,a^{b\\,c}\\,\\right)\\color{blue}{\\text{mod}}\\,d =\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d =\\left(\\,a^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d =\\left(\\,a\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{b\\,c}\\,\\,\\color{blue}{\\text{mod}}\\,d \\\\[2mm] \n\\small \\left(3^{\\small13\\times15}\\right)\\text{mod}\\,17 =\\left(3^{\\small13}\\,\\text{mod}\\,17\\right)^{\\small15}\\,\\text{mod}\\,17 =\\left(3^{\\small15}\\,\\text{mod}\\,17\\right)^{\\small13}\\,\\text{mod}\\,17 =\\left(3\\,\\text{mod}\\,17\\right)^{\\small13\\times15}\\,\\text{mod}\\,17 \n$$\nWhere above identity results from the multiplication distributive property:\n$$ \n\\left[ a_{\\small1}\\,.\\,a_{\\small2}\\,.\\,a_{\\small3}\\,.\\,\\cdots\\, \\right]\\,\\color{blue}{\\text{mod}}\\,d \\,=\\, \\left[ \\left(a_{\\small1}\\,\\color{blue}{\\text{mod}}\\,d\\right)\\left(a_{\\small2}\\,\\color{blue}{\\text{mod}}\\,d\\right)\\left(a_{\\small3}\\,\\color{blue}{\\text{mod}}\\,d\\right) \\cdots\\, \\right]\\,\\color{blue}{\\text{mod}}\\,d \\,\\,\\Rightarrow \\\\[6mm] \n\\begin{align} \n\\left(\\,a^{b\\,c}\\,\\right)\\color{blue}{\\text{mod}}\\,d &amp;= \\left(\\,a^{b}\\,\\right)^{c}\\,\\color{blue}{\\text{mod}}\\,d = \\underbrace{\\left[a^b.a^b.\\,\\cdots\\,.a^b\\right]}_{c\\,\\text{ times}}\\,\\,\\color{blue}{\\text{mod}}\\,d \\\\[2mm] \n&amp;= 
\\underbrace{\\left[\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\,\\cdots\\,\\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)\\right]}_{c\\,\\text{ times}}\\,\\,\\color{blue}{\\text{mod}}\\,d = \\left(\\,a^{b}\\,\\,\\color{blue}{\\text{mod}}\\,d\\,\\right)^{c}\\,\\,\\color{blue}{\\text{mod}}\\,d \n\\end{align} \n$$\n <br/> \n$$ \n\\begin{align} \n{\\small\\text{Let:}}\\,a_{\\small1}=q_{\\small1}.d+r_{\\small1}\\,, &amp;\\quad a_{\\small2}=q_{\\small2}.d+r_{\\small2}\\,, \\quad a_{\\small3}=q_{\\small3}.d+r_{\\small3}\\,, \\,\\,\\cdots \\\\[2mm] \n\\left[ a_{\\small1}\\,a_{\\small2}\\,a_{\\small3}\\,\\cdots\\, \\right]\\,\\text{mod}\\,d &amp;=\\left[ \\left(q_{\\small1}.d+r_{\\small1}\\right)\\,\\left(q_{\\small2}.d+r_{\\small2}\\right)\\,\\left(q_{\\small3}.d+r_{\\small3}\\right)\\,\\cdots\\, \\right]\\,\\text{mod}\\,d \\\\[2mm] \n&amp;\\quad \\,\\,\\{{\\small\\text{ all terms will have a power of }}\\,\\,d\\,\\, {\\small\\text{ except the last term }}\\} \\\\[2mm] \n&amp;= \\left[ {\\large\\alpha}\\left(q_{\\small1},r_{\\small1},q_{\\small2},r_{\\small2},q_{\\small3},r_{\\small3},\\cdots\\right).d+\\left(r_{\\small1}\\,r_{\\small2}\\,r_{\\small3}\\,\\cdots\\,\\right) \\right]\\,\\text{mod}\\,d \\\\[2mm] \n&amp;= \\left[\\,r_{\\small1}\\,r_{\\small2}\\,r_{\\small3}\\,\\cdots\\, \\right]\\,\\text{mod}\\,d \\quad\\qquad\\{{\\small\\left(\\alpha.d\\right)\\,\\text{mod}\\,d=0}\\} \\\\[2mm] \n&amp;= \\left[ \\left(a_{\\small1}\\,\\text{mod}\\,d\\right)\\left(a_{\\small2}\\,\\text{mod}\\,d\\right)\\left(a_{\\small3}\\,\\text{mod}\\,d\\right) \\cdots\\, \\right]\\,\\text{mod}\\,d \\\\[2mm] \n\\end{align} \n$$</p>\n" }
]
Question: Where is the mathematical logic with this exponent behaviour? Answer 1: <p>Assuming the number and base are positive, all the modulo operator <code>n % 17</code> does is to give you the last "digit" of $n$ when written in base 17.</p> <p>Now when you want to calculate the last digit (base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\times b$ only depends on the last digits of $a$ and $b$.</p> Answer 2: <p>My view is that I don't see modulo as an operation the way computers do. To me, it's a <em>relation</em>, and its symbol looks like ${\cdot}\equiv {\cdot}\pmod{17}$. Two numbers can be checked against one another (we write that by filling in the two numbers where the dots are) and either they fulfill that property, or they don't. This property works well with addition, subtraction, multiplication and exponentiation in the sense that if $$ a\equiv b\pmod{17} $$ is true, then $$ a+n \equiv b+n\pmod{17}\\ a-n \equiv b-n\pmod{17}\\ a\cdot n \equiv b\cdot n\pmod{17}\\ a^n \equiv b^n\pmod{17} $$ are also all true for any integer $n$.</p> <p>What you do is first saying that $3^{13}\equiv 12\pmod{17}$, in other words, $a = 3^{13}$ and $b = 12$. Then you raise each side of that to the $15$th power, i.e. set $n = 15$ and use the last of the four above properties. 
The relation therefore still holds.</p> <p>What <code>a % k</code> does in a computer program is simply to find the smallest natural number $b$ such that $a \equiv b\pmod{k}$. As long as your operations are addition, subtraction, multiplication and exponentiation, it doesn't matter whether you do this reducing step during intermediate calculations or not, precisely by the four properties above.</p> Answer 3: <p>Denoting the <a href="https://en.wikipedia.org/wiki/Modulo_operation" rel="nofollow noreferrer">Modulo operator %</a> by $\left(\,\text{mod}\,\right)$, we have for all $\,a,b,c,d\in{\mathbb N}^{+}\,$: $$ \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d =\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d =\left(\,a^{c}\,\,\color{blue}{\text{mod}}\,d\,\right)^{b}\,\,\color{blue}{\text{mod}}\,d =\left(\,a\,\color{blue}{\text{mod}}\,d\,\right)^{b\,c}\,\,\color{blue}{\text{mod}}\,d \\[2mm] \small \left(3^{\small13\times15}\right)\text{mod}\,17 =\left(3^{\small13}\,\text{mod}\,17\right)^{\small15}\,\text{mod}\,17 =\left(3^{\small15}\,\text{mod}\,17\right)^{\small13}\,\text{mod}\,17 =\left(3\,\text{mod}\,17\right)^{\small13\times15}\,\text{mod}\,17 $$ Where above identity results from the multiplication distributive property: $$ \left[ a_{\small1}\,.\,a_{\small2}\,.\,a_{\small3}\,.\,\cdots\, \right]\,\color{blue}{\text{mod}}\,d \,=\, \left[ \left(a_{\small1}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small2}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small3}\,\color{blue}{\text{mod}}\,d\right) \cdots\, \right]\,\color{blue}{\text{mod}}\,d \,\,\Rightarrow \\[6mm] \begin{align} \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d &amp;= \left(\,a^{b}\,\right)^{c}\,\color{blue}{\text{mod}}\,d = \underbrace{\left[a^b.a^b.\,\cdots\,.a^b\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d \\[2mm] &amp;= 
\underbrace{\left[\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\,\cdots\,\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d = \left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d \end{align} $$ <br/> $$ \begin{align} {\small\text{Let:}}\,a_{\small1}=q_{\small1}.d+r_{\small1}\,, &amp;\quad a_{\small2}=q_{\small2}.d+r_{\small2}\,, \quad a_{\small3}=q_{\small3}.d+r_{\small3}\,, \,\,\cdots \\[2mm] \left[ a_{\small1}\,a_{\small2}\,a_{\small3}\,\cdots\, \right]\,\text{mod}\,d &amp;=\left[ \left(q_{\small1}.d+r_{\small1}\right)\,\left(q_{\small2}.d+r_{\small2}\right)\,\left(q_{\small3}.d+r_{\small3}\right)\,\cdots\, \right]\,\text{mod}\,d \\[2mm] &amp;\quad \,\,\{{\small\text{ all terms will have a power of }}\,\,d\,\, {\small\text{ except the last term }}\} \\[2mm] &amp;= \left[ {\large\alpha}\left(q_{\small1},r_{\small1},q_{\small2},r_{\small2},q_{\small3},r_{\small3},\cdots\right).d+\left(r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\,\right) \right]\,\text{mod}\,d \\[2mm] &amp;= \left[\,r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\, \right]\,\text{mod}\,d \quad\qquad\{{\small\left(\alpha.d\right)\,\text{mod}\,d=0}\} \\[2mm] &amp;= \left[ \left(a_{\small1}\,\text{mod}\,d\right)\left(a_{\small2}\,\text{mod}\,d\right)\left(a_{\small3}\,\text{mod}\,d\right) \cdots\, \right]\,\text{mod}\,d \\[2mm] \end{align} $$</p> Answer 4: <p>Assuming the number and base are positive, all the modulo operator <code>n % 17</code> does is to give you the last "digit" of $n$ when written in base 17.</p> <p>Now when you want to calculate the last digit (base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. 
But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\times b$ only depends on the last digits of $a$ and $b$.</p> Answer 5: <p>My view is that I don't see modulo as an operation the way computers do. To me, it's a <em>relation</em>, and its symbol looks like ${\cdot}\equiv {\cdot}\pmod{17}$. Two numbers can be checked against one another (we write that by filling in the two numbers where the dots are) and either they fulfill that property, or they don't. This property works well with addition, subtraction, multiplication and exponentiation in the sense that if $$ a\equiv b\pmod{17} $$ is true, then $$ a+n \equiv b+n\pmod{17}\\ a-n \equiv b-n\pmod{17}\\ a\cdot n \equiv b\cdot n\pmod{17}\\ a^n \equiv b^n\pmod{17} $$ are also all true for any integer $n$.</p> <p>What you do is first saying that $3^{13}\equiv 12\pmod{17}$, in other words, $a = 3^{13}$ and $b = 12$. Then you raise each side of that to the $15$th power, i.e. set $n = 15$ and use the last of the four above properties. The relation therefore still holds.</p> <p>What <code>a % k</code> does in a computer program is simply to find the smallest natural number $b$ such that $a \equiv b\pmod{k}$. 
As long as your operations are addition, subtraction, multiplication and exponentiation, it doesn't matter whether you do this reducing step during intermediate calculations or not, precisely by the four properties above.</p> Answer 6: <p>Denoting the <a href="https://en.wikipedia.org/wiki/Modulo_operation" rel="nofollow noreferrer">Modulo operator %</a> by $\left(\,\text{mod}\,\right)$, we have for all $\,a,b,c,d\in{\mathbb N}^{+}\,$: $$ \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d =\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d =\left(\,a^{c}\,\,\color{blue}{\text{mod}}\,d\,\right)^{b}\,\,\color{blue}{\text{mod}}\,d =\left(\,a\,\color{blue}{\text{mod}}\,d\,\right)^{b\,c}\,\,\color{blue}{\text{mod}}\,d \\[2mm] \small \left(3^{\small13\times15}\right)\text{mod}\,17 =\left(3^{\small13}\,\text{mod}\,17\right)^{\small15}\,\text{mod}\,17 =\left(3^{\small15}\,\text{mod}\,17\right)^{\small13}\,\text{mod}\,17 =\left(3\,\text{mod}\,17\right)^{\small13\times15}\,\text{mod}\,17 $$ Where above identity results from the multiplication distributive property: $$ \left[ a_{\small1}\,.\,a_{\small2}\,.\,a_{\small3}\,.\,\cdots\, \right]\,\color{blue}{\text{mod}}\,d \,=\, \left[ \left(a_{\small1}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small2}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small3}\,\color{blue}{\text{mod}}\,d\right) \cdots\, \right]\,\color{blue}{\text{mod}}\,d \,\,\Rightarrow \\[6mm] \begin{align} \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d &amp;= \left(\,a^{b}\,\right)^{c}\,\color{blue}{\text{mod}}\,d = \underbrace{\left[a^b.a^b.\,\cdots\,.a^b\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d \\[2mm] &amp;= \underbrace{\left[\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\,\cdots\,\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d = 
\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d \end{align} $$ <br/> $$ \begin{align} {\small\text{Let:}}\,a_{\small1}=q_{\small1}.d+r_{\small1}\,, &amp;\quad a_{\small2}=q_{\small2}.d+r_{\small2}\,, \quad a_{\small3}=q_{\small3}.d+r_{\small3}\,, \,\,\cdots \\[2mm] \left[ a_{\small1}\,a_{\small2}\,a_{\small3}\,\cdots\, \right]\,\text{mod}\,d &amp;=\left[ \left(q_{\small1}.d+r_{\small1}\right)\,\left(q_{\small2}.d+r_{\small2}\right)\,\left(q_{\small3}.d+r_{\small3}\right)\,\cdots\, \right]\,\text{mod}\,d \\[2mm] &amp;\quad \,\,\{{\small\text{ all terms will have a power of }}\,\,d\,\, {\small\text{ except the last term }}\} \\[2mm] &amp;= \left[ {\large\alpha}\left(q_{\small1},r_{\small1},q_{\small2},r_{\small2},q_{\small3},r_{\small3},\cdots\right).d+\left(r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\,\right) \right]\,\text{mod}\,d \\[2mm] &amp;= \left[\,r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\, \right]\,\text{mod}\,d \quad\qquad\{{\small\left(\alpha.d\right)\,\text{mod}\,d=0}\} \\[2mm] &amp;= \left[ \left(a_{\small1}\,\text{mod}\,d\right)\left(a_{\small2}\,\text{mod}\,d\right)\left(a_{\small3}\,\text{mod}\,d\right) \cdots\, \right]\,\text{mod}\,d \\[2mm] \end{align} $$</p> Answer 7: <p>Assuming the number and base are positive, all the modulo operator <code>n % 17</code> does is to give you the last "digit" of $n$ when written in base 17.</p> <p>Now when you want to calculate the last digit (base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. 
This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\times b$ only depends on the last digits of $a$ and $b$.</p> Answer 8: <p>My view is that I don't see modulo as an operation the way computers do. To me, it's a <em>relation</em>, and its symbol looks like ${\cdot}\equiv {\cdot}\pmod{17}$. Two numbers can be checked against one another (we write that by filling in the two numbers where the dots are) and either they fulfill that property, or they don't. This property works well with addition, subtraction, multiplication and exponentiation in the sense that if $$ a\equiv b\pmod{17} $$ is true, then $$ a+n \equiv b+n\pmod{17}\\ a-n \equiv b-n\pmod{17}\\ a\cdot n \equiv b\cdot n\pmod{17}\\ a^n \equiv b^n\pmod{17} $$ are also all true for any integer $n$.</p> <p>What you do is first saying that $3^{13}\equiv 12\pmod{17}$, in other words, $a = 3^{13}$ and $b = 12$. Then you raise each side of that to the $15$th power, i.e. set $n = 15$ and use the last of the four above properties. The relation therefore still holds.</p> <p>What <code>a % k</code> does in a computer program is simply to find the smallest natural number $b$ such that $a \equiv b\pmod{k}$. 
As long as your operations are addition, subtraction, multiplication and exponentiation, it doesn't matter whether you do this reducing step during intermediate calculations or not, precisely by the four properties above.</p> Answer 9: <p>Denoting the <a href="https://en.wikipedia.org/wiki/Modulo_operation" rel="nofollow noreferrer">Modulo operator %</a> by $\left(\,\text{mod}\,\right)$, we have for all $\,a,b,c,d\in{\mathbb N}^{+}\,$: $$ \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d =\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d =\left(\,a^{c}\,\,\color{blue}{\text{mod}}\,d\,\right)^{b}\,\,\color{blue}{\text{mod}}\,d =\left(\,a\,\color{blue}{\text{mod}}\,d\,\right)^{b\,c}\,\,\color{blue}{\text{mod}}\,d \\[2mm] \small \left(3^{\small13\times15}\right)\text{mod}\,17 =\left(3^{\small13}\,\text{mod}\,17\right)^{\small15}\,\text{mod}\,17 =\left(3^{\small15}\,\text{mod}\,17\right)^{\small13}\,\text{mod}\,17 =\left(3\,\text{mod}\,17\right)^{\small13\times15}\,\text{mod}\,17 $$ Where above identity results from the multiplication distributive property: $$ \left[ a_{\small1}\,.\,a_{\small2}\,.\,a_{\small3}\,.\,\cdots\, \right]\,\color{blue}{\text{mod}}\,d \,=\, \left[ \left(a_{\small1}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small2}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small3}\,\color{blue}{\text{mod}}\,d\right) \cdots\, \right]\,\color{blue}{\text{mod}}\,d \,\,\Rightarrow \\[6mm] \begin{align} \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d &amp;= \left(\,a^{b}\,\right)^{c}\,\color{blue}{\text{mod}}\,d = \underbrace{\left[a^b.a^b.\,\cdots\,.a^b\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d \\[2mm] &amp;= \underbrace{\left[\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\,\cdots\,\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d = 
\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d \end{align} $$ <br/> $$ \begin{align} {\small\text{Let:}}\,a_{\small1}=q_{\small1}.d+r_{\small1}\,, &amp;\quad a_{\small2}=q_{\small2}.d+r_{\small2}\,, \quad a_{\small3}=q_{\small3}.d+r_{\small3}\,, \,\,\cdots \\[2mm] \left[ a_{\small1}\,a_{\small2}\,a_{\small3}\,\cdots\, \right]\,\text{mod}\,d &amp;=\left[ \left(q_{\small1}.d+r_{\small1}\right)\,\left(q_{\small2}.d+r_{\small2}\right)\,\left(q_{\small3}.d+r_{\small3}\right)\,\cdots\, \right]\,\text{mod}\,d \\[2mm] &amp;\quad \,\,\{{\small\text{ all terms will have a power of }}\,\,d\,\, {\small\text{ except the last term }}\} \\[2mm] &amp;= \left[ {\large\alpha}\left(q_{\small1},r_{\small1},q_{\small2},r_{\small2},q_{\small3},r_{\small3},\cdots\right).d+\left(r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\,\right) \right]\,\text{mod}\,d \\[2mm] &amp;= \left[\,r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\, \right]\,\text{mod}\,d \quad\qquad\{{\small\left(\alpha.d\right)\,\text{mod}\,d=0}\} \\[2mm] &amp;= \left[ \left(a_{\small1}\,\text{mod}\,d\right)\left(a_{\small2}\,\text{mod}\,d\right)\left(a_{\small3}\,\text{mod}\,d\right) \cdots\, \right]\,\text{mod}\,d \\[2mm] \end{align} $$</p> Answer 10: <p>Assuming the number and base are positive, all the modulo operator <code>n % 17</code> does is to give you the last "digit" of $n$ when written in base 17.</p> <p>Now when you want to calculate the last digit (base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. 
This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\times b$ only depends on the last digits of $a$ and $b$.</p> Answer 11: <p>My view is that I don't see modulo as an operation the way computers do. To me, it's a <em>relation</em>, and its symbol looks like ${\cdot}\equiv {\cdot}\pmod{17}$. Two numbers can be checked against one another (we write that by filling in the two numbers where the dots are) and either they fulfill that property, or they don't. This property works well with addition, subtraction, multiplication and exponentiation in the sense that if $$ a\equiv b\pmod{17} $$ is true, then $$ a+n \equiv b+n\pmod{17}\\ a-n \equiv b-n\pmod{17}\\ a\cdot n \equiv b\cdot n\pmod{17}\\ a^n \equiv b^n\pmod{17} $$ are also all true for any integer $n$.</p> <p>What you do is first saying that $3^{13}\equiv 12\pmod{17}$, in other words, $a = 3^{13}$ and $b = 12$. Then you raise each side of that to the $15$th power, i.e. set $n = 15$ and use the last of the four above properties. The relation therefore still holds.</p> <p>What <code>a % k</code> does in a computer program is simply to find the smallest natural number $b$ such that $a \equiv b\pmod{k}$. 
As long as your operations are addition, subtraction, multiplication and exponentiation, it doesn't matter whether you do this reducing step during intermediate calculations or not, precisely by the four properties above.</p> Answer 12: <p>Denoting the <a href="https://en.wikipedia.org/wiki/Modulo_operation" rel="nofollow noreferrer">Modulo operator %</a> by $\left(\,\text{mod}\,\right)$, we have for all $\,a,b,c,d\in{\mathbb N}^{+}\,$: $$ \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d =\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d =\left(\,a^{c}\,\,\color{blue}{\text{mod}}\,d\,\right)^{b}\,\,\color{blue}{\text{mod}}\,d =\left(\,a\,\color{blue}{\text{mod}}\,d\,\right)^{b\,c}\,\,\color{blue}{\text{mod}}\,d \\[2mm] \small \left(3^{\small13\times15}\right)\text{mod}\,17 =\left(3^{\small13}\,\text{mod}\,17\right)^{\small15}\,\text{mod}\,17 =\left(3^{\small15}\,\text{mod}\,17\right)^{\small13}\,\text{mod}\,17 =\left(3\,\text{mod}\,17\right)^{\small13\times15}\,\text{mod}\,17 $$ Where above identity results from the multiplication distributive property: $$ \left[ a_{\small1}\,.\,a_{\small2}\,.\,a_{\small3}\,.\,\cdots\, \right]\,\color{blue}{\text{mod}}\,d \,=\, \left[ \left(a_{\small1}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small2}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small3}\,\color{blue}{\text{mod}}\,d\right) \cdots\, \right]\,\color{blue}{\text{mod}}\,d \,\,\Rightarrow \\[6mm] \begin{align} \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d &amp;= \left(\,a^{b}\,\right)^{c}\,\color{blue}{\text{mod}}\,d = \underbrace{\left[a^b.a^b.\,\cdots\,.a^b\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d \\[2mm] &amp;= \underbrace{\left[\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\,\cdots\,\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d = 
\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d \end{align} $$ <br/> $$ \begin{align} {\small\text{Let:}}\,a_{\small1}=q_{\small1}.d+r_{\small1}\,, &amp;\quad a_{\small2}=q_{\small2}.d+r_{\small2}\,, \quad a_{\small3}=q_{\small3}.d+r_{\small3}\,, \,\,\cdots \\[2mm] \left[ a_{\small1}\,a_{\small2}\,a_{\small3}\,\cdots\, \right]\,\text{mod}\,d &amp;=\left[ \left(q_{\small1}.d+r_{\small1}\right)\,\left(q_{\small2}.d+r_{\small2}\right)\,\left(q_{\small3}.d+r_{\small3}\right)\,\cdots\, \right]\,\text{mod}\,d \\[2mm] &amp;\quad \,\,\{{\small\text{ all terms will have a power of }}\,\,d\,\, {\small\text{ except the last term }}\} \\[2mm] &amp;= \left[ {\large\alpha}\left(q_{\small1},r_{\small1},q_{\small2},r_{\small2},q_{\small3},r_{\small3},\cdots\right).d+\left(r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\,\right) \right]\,\text{mod}\,d \\[2mm] &amp;= \left[\,r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\, \right]\,\text{mod}\,d \quad\qquad\{{\small\left(\alpha.d\right)\,\text{mod}\,d=0}\} \\[2mm] &amp;= \left[ \left(a_{\small1}\,\text{mod}\,d\right)\left(a_{\small2}\,\text{mod}\,d\right)\left(a_{\small3}\,\text{mod}\,d\right) \cdots\, \right]\,\text{mod}\,d \\[2mm] \end{align} $$</p> Answer 13: <p>Assuming the number and base are positive, all the modulo operator <code>n % 17</code> does is to give you the last "digit" of $n$ when written in base 17.</p> <p>Now when you want to calculate the last digit (base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. 
This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\times b$ only depends on the last digits of $a$ and $b$.</p> Answer 14: <p>My view is that I don't see modulo as an operation the way computers do. To me, it's a <em>relation</em>, and its symbol looks like ${\cdot}\equiv {\cdot}\pmod{17}$. Two numbers can be checked against one another (we write that by filling in the two numbers where the dots are) and either they fulfill that property, or they don't. This property works well with addition, subtraction, multiplication and exponentiation in the sense that if $$ a\equiv b\pmod{17} $$ is true, then $$ a+n \equiv b+n\pmod{17}\\ a-n \equiv b-n\pmod{17}\\ a\cdot n \equiv b\cdot n\pmod{17}\\ a^n \equiv b^n\pmod{17} $$ are also all true for any integer $n$.</p> <p>What you do is first saying that $3^{13}\equiv 12\pmod{17}$, in other words, $a = 3^{13}$ and $b = 12$. Then you raise each side of that to the $15$th power, i.e. set $n = 15$ and use the last of the four above properties. The relation therefore still holds.</p> <p>What <code>a % k</code> does in a computer program is simply to find the smallest natural number $b$ such that $a \equiv b\pmod{k}$. 
As long as your operations are addition, subtraction, multiplication and exponentiation, it doesn't matter whether you do this reducing step during intermediate calculations or not, precisely by the four properties above.</p> Answer 15: <p>Denoting the <a href="https://en.wikipedia.org/wiki/Modulo_operation" rel="nofollow noreferrer">Modulo operator %</a> by $\left(\,\text{mod}\,\right)$, we have for all $\,a,b,c,d\in{\mathbb N}^{+}\,$: $$ \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d =\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d =\left(\,a^{c}\,\,\color{blue}{\text{mod}}\,d\,\right)^{b}\,\,\color{blue}{\text{mod}}\,d =\left(\,a\,\color{blue}{\text{mod}}\,d\,\right)^{b\,c}\,\,\color{blue}{\text{mod}}\,d \\[2mm] \small \left(3^{\small13\times15}\right)\text{mod}\,17 =\left(3^{\small13}\,\text{mod}\,17\right)^{\small15}\,\text{mod}\,17 =\left(3^{\small15}\,\text{mod}\,17\right)^{\small13}\,\text{mod}\,17 =\left(3\,\text{mod}\,17\right)^{\small13\times15}\,\text{mod}\,17 $$ Where above identity results from the multiplication distributive property: $$ \left[ a_{\small1}\,.\,a_{\small2}\,.\,a_{\small3}\,.\,\cdots\, \right]\,\color{blue}{\text{mod}}\,d \,=\, \left[ \left(a_{\small1}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small2}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small3}\,\color{blue}{\text{mod}}\,d\right) \cdots\, \right]\,\color{blue}{\text{mod}}\,d \,\,\Rightarrow \\[6mm] \begin{align} \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d &amp;= \left(\,a^{b}\,\right)^{c}\,\color{blue}{\text{mod}}\,d = \underbrace{\left[a^b.a^b.\,\cdots\,.a^b\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d \\[2mm] &amp;= \underbrace{\left[\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\,\cdots\,\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d = 
\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d \end{align} $$ <br/> $$ \begin{align} {\small\text{Let:}}\,a_{\small1}=q_{\small1}.d+r_{\small1}\,, &amp;\quad a_{\small2}=q_{\small2}.d+r_{\small2}\,, \quad a_{\small3}=q_{\small3}.d+r_{\small3}\,, \,\,\cdots \\[2mm] \left[ a_{\small1}\,a_{\small2}\,a_{\small3}\,\cdots\, \right]\,\text{mod}\,d &amp;=\left[ \left(q_{\small1}.d+r_{\small1}\right)\,\left(q_{\small2}.d+r_{\small2}\right)\,\left(q_{\small3}.d+r_{\small3}\right)\,\cdots\, \right]\,\text{mod}\,d \\[2mm] &amp;\quad \,\,\{{\small\text{ all terms will have a power of }}\,\,d\,\, {\small\text{ except the last term }}\} \\[2mm] &amp;= \left[ {\large\alpha}\left(q_{\small1},r_{\small1},q_{\small2},r_{\small2},q_{\small3},r_{\small3},\cdots\right).d+\left(r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\,\right) \right]\,\text{mod}\,d \\[2mm] &amp;= \left[\,r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\, \right]\,\text{mod}\,d \quad\qquad\{{\small\left(\alpha.d\right)\,\text{mod}\,d=0}\} \\[2mm] &amp;= \left[ \left(a_{\small1}\,\text{mod}\,d\right)\left(a_{\small2}\,\text{mod}\,d\right)\left(a_{\small3}\,\text{mod}\,d\right) \cdots\, \right]\,\text{mod}\,d \\[2mm] \end{align} $$</p> Answer 16: <p>Assuming the number and base are positive, all the modulo operator <code>n % 17</code> does is to give you the last "digit" of $n$ when written in base 17.</p> <p>Now when you want to calculate the last digit (base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. 
This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\times b$ only depends on the last digits of $a$ and $b$.</p> Answer 17: <p>My view is that I don't see modulo as an operation the way computers do. To me, it's a <em>relation</em>, and its symbol looks like ${\cdot}\equiv {\cdot}\pmod{17}$. Two numbers can be checked against one another (we write that by filling in the two numbers where the dots are) and either they fulfill that property, or they don't. This property works well with addition, subtraction, multiplication and exponentiation in the sense that if $$ a\equiv b\pmod{17} $$ is true, then $$ a+n \equiv b+n\pmod{17}\\ a-n \equiv b-n\pmod{17}\\ a\cdot n \equiv b\cdot n\pmod{17}\\ a^n \equiv b^n\pmod{17} $$ are also all true for any integer $n$.</p> <p>What you do is first saying that $3^{13}\equiv 12\pmod{17}$, in other words, $a = 3^{13}$ and $b = 12$. Then you raise each side of that to the $15$th power, i.e. set $n = 15$ and use the last of the four above properties. The relation therefore still holds.</p> <p>What <code>a % k</code> does in a computer program is simply to find the smallest natural number $b$ such that $a \equiv b\pmod{k}$. 
As long as your operations are addition, subtraction, multiplication and exponentiation, it doesn't matter whether you do this reducing step during intermediate calculations or not, precisely by the four properties above.</p> Answer 18: <p>Denoting the <a href="https://en.wikipedia.org/wiki/Modulo_operation" rel="nofollow noreferrer">Modulo operator %</a> by $\left(\,\text{mod}\,\right)$, we have for all $\,a,b,c,d\in{\mathbb N}^{+}\,$: $$ \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d =\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d =\left(\,a^{c}\,\,\color{blue}{\text{mod}}\,d\,\right)^{b}\,\,\color{blue}{\text{mod}}\,d =\left(\,a\,\color{blue}{\text{mod}}\,d\,\right)^{b\,c}\,\,\color{blue}{\text{mod}}\,d \\[2mm] \small \left(3^{\small13\times15}\right)\text{mod}\,17 =\left(3^{\small13}\,\text{mod}\,17\right)^{\small15}\,\text{mod}\,17 =\left(3^{\small15}\,\text{mod}\,17\right)^{\small13}\,\text{mod}\,17 =\left(3\,\text{mod}\,17\right)^{\small13\times15}\,\text{mod}\,17 $$ Where above identity results from the multiplication distributive property: $$ \left[ a_{\small1}\,.\,a_{\small2}\,.\,a_{\small3}\,.\,\cdots\, \right]\,\color{blue}{\text{mod}}\,d \,=\, \left[ \left(a_{\small1}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small2}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small3}\,\color{blue}{\text{mod}}\,d\right) \cdots\, \right]\,\color{blue}{\text{mod}}\,d \,\,\Rightarrow \\[6mm] \begin{align} \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d &amp;= \left(\,a^{b}\,\right)^{c}\,\color{blue}{\text{mod}}\,d = \underbrace{\left[a^b.a^b.\,\cdots\,.a^b\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d \\[2mm] &amp;= \underbrace{\left[\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\,\cdots\,\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d = 
\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d \end{align} $$ <br/> $$ \begin{align} {\small\text{Let:}}\,a_{\small1}=q_{\small1}.d+r_{\small1}\,, &amp;\quad a_{\small2}=q_{\small2}.d+r_{\small2}\,, \quad a_{\small3}=q_{\small3}.d+r_{\small3}\,, \,\,\cdots \\[2mm] \left[ a_{\small1}\,a_{\small2}\,a_{\small3}\,\cdots\, \right]\,\text{mod}\,d &amp;=\left[ \left(q_{\small1}.d+r_{\small1}\right)\,\left(q_{\small2}.d+r_{\small2}\right)\,\left(q_{\small3}.d+r_{\small3}\right)\,\cdots\, \right]\,\text{mod}\,d \\[2mm] &amp;\quad \,\,\{{\small\text{ all terms will have a power of }}\,\,d\,\, {\small\text{ except the last term }}\} \\[2mm] &amp;= \left[ {\large\alpha}\left(q_{\small1},r_{\small1},q_{\small2},r_{\small2},q_{\small3},r_{\small3},\cdots\right).d+\left(r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\,\right) \right]\,\text{mod}\,d \\[2mm] &amp;= \left[\,r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\, \right]\,\text{mod}\,d \quad\qquad\{{\small\left(\alpha.d\right)\,\text{mod}\,d=0}\} \\[2mm] &amp;= \left[ \left(a_{\small1}\,\text{mod}\,d\right)\left(a_{\small2}\,\text{mod}\,d\right)\left(a_{\small3}\,\text{mod}\,d\right) \cdots\, \right]\,\text{mod}\,d \\[2mm] \end{align} $$</p> Answer 19: <p>Assuming the number and base are positive, all the modulo operator <code>n % 17</code> does is to give you the last "digit" of $n$ when written in base 17.</p> <p>Now when you want to calculate the last digit (base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. 
This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\times b$ only depends on the last digits of $a$ and $b$.</p> Answer 20: <p>My view is that I don't see modulo as an operation the way computers do. To me, it's a <em>relation</em>, and its symbol looks like ${\cdot}\equiv {\cdot}\pmod{17}$. Two numbers can be checked against one another (we write that by filling in the two numbers where the dots are) and either they fulfill that property, or they don't. This property works well with addition, subtraction, multiplication and exponentiation in the sense that if $$ a\equiv b\pmod{17} $$ is true, then $$ a+n \equiv b+n\pmod{17}\\ a-n \equiv b-n\pmod{17}\\ a\cdot n \equiv b\cdot n\pmod{17}\\ a^n \equiv b^n\pmod{17} $$ are also all true for any integer $n$.</p> <p>What you do is first saying that $3^{13}\equiv 12\pmod{17}$, in other words, $a = 3^{13}$ and $b = 12$. Then you raise each side of that to the $15$th power, i.e. set $n = 15$ and use the last of the four above properties. The relation therefore still holds.</p> <p>What <code>a % k</code> does in a computer program is simply to find the smallest natural number $b$ such that $a \equiv b\pmod{k}$. 
As long as your operations are addition, subtraction, multiplication and exponentiation, it doesn't matter whether you do this reducing step during intermediate calculations or not, precisely by the four properties above.</p> Answer 21: <p>Denoting the <a href="https://en.wikipedia.org/wiki/Modulo_operation" rel="nofollow noreferrer">Modulo operator %</a> by $\left(\,\text{mod}\,\right)$, we have for all $\,a,b,c,d\in{\mathbb N}^{+}\,$: $$ \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d =\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d =\left(\,a^{c}\,\,\color{blue}{\text{mod}}\,d\,\right)^{b}\,\,\color{blue}{\text{mod}}\,d =\left(\,a\,\color{blue}{\text{mod}}\,d\,\right)^{b\,c}\,\,\color{blue}{\text{mod}}\,d \\[2mm] \small \left(3^{\small13\times15}\right)\text{mod}\,17 =\left(3^{\small13}\,\text{mod}\,17\right)^{\small15}\,\text{mod}\,17 =\left(3^{\small15}\,\text{mod}\,17\right)^{\small13}\,\text{mod}\,17 =\left(3\,\text{mod}\,17\right)^{\small13\times15}\,\text{mod}\,17 $$ Where above identity results from the multiplication distributive property: $$ \left[ a_{\small1}\,.\,a_{\small2}\,.\,a_{\small3}\,.\,\cdots\, \right]\,\color{blue}{\text{mod}}\,d \,=\, \left[ \left(a_{\small1}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small2}\,\color{blue}{\text{mod}}\,d\right)\left(a_{\small3}\,\color{blue}{\text{mod}}\,d\right) \cdots\, \right]\,\color{blue}{\text{mod}}\,d \,\,\Rightarrow \\[6mm] \begin{align} \left(\,a^{b\,c}\,\right)\color{blue}{\text{mod}}\,d &amp;= \left(\,a^{b}\,\right)^{c}\,\color{blue}{\text{mod}}\,d = \underbrace{\left[a^b.a^b.\,\cdots\,.a^b\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d \\[2mm] &amp;= \underbrace{\left[\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\,\cdots\,\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)\right]}_{c\,\text{ times}}\,\,\color{blue}{\text{mod}}\,d = 
\left(\,a^{b}\,\,\color{blue}{\text{mod}}\,d\,\right)^{c}\,\,\color{blue}{\text{mod}}\,d \end{align} $$ <br/> $$ \begin{align} {\small\text{Let:}}\,a_{\small1}=q_{\small1}.d+r_{\small1}\,, &amp;\quad a_{\small2}=q_{\small2}.d+r_{\small2}\,, \quad a_{\small3}=q_{\small3}.d+r_{\small3}\,, \,\,\cdots \\[2mm] \left[ a_{\small1}\,a_{\small2}\,a_{\small3}\,\cdots\, \right]\,\text{mod}\,d &amp;=\left[ \left(q_{\small1}.d+r_{\small1}\right)\,\left(q_{\small2}.d+r_{\small2}\right)\,\left(q_{\small3}.d+r_{\small3}\right)\,\cdots\, \right]\,\text{mod}\,d \\[2mm] &amp;\quad \,\,\{{\small\text{ all terms will have a power of }}\,\,d\,\, {\small\text{ except the last term }}\} \\[2mm] &amp;= \left[ {\large\alpha}\left(q_{\small1},r_{\small1},q_{\small2},r_{\small2},q_{\small3},r_{\small3},\cdots\right).d+\left(r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\,\right) \right]\,\text{mod}\,d \\[2mm] &amp;= \left[\,r_{\small1}\,r_{\small2}\,r_{\small3}\,\cdots\, \right]\,\text{mod}\,d \quad\qquad\{{\small\left(\alpha.d\right)\,\text{mod}\,d=0}\} \\[2mm] &amp;= \left[ \left(a_{\small1}\,\text{mod}\,d\right)\left(a_{\small2}\,\text{mod}\,d\right)\left(a_{\small3}\,\text{mod}\,d\right) \cdots\, \right]\,\text{mod}\,d \\[2mm] \end{align} $$</p> Answer 22: <p>Assuming the number and base are positive, all the modulo operator <code>n % 17</code> does is to give you the last "digit" of $n$ when written in base 17.</p> <p>Now when you want to calculate the last digit (base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. 
This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\times b$ only depends on the last digits of $a$ and $b$.</p>
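The reduction property the answers above rely on can be checked directly. A quick sanity check (not part of any answer above), using Python's built-in three-argument `pow` for modular exponentiation:

```python
# The answers claim that reducing mod 17 at intermediate steps does not
# change the final residue of 3^(13*15). Verify all three routes agree.

full = (3 ** (13 * 15)) % 17            # compute the huge power, reduce once at the end
stepwise = pow(pow(3, 13, 17), 15, 17)  # reduce after 3^13 (giving 12), then raise to 15
builtin = pow(3, 13 * 15, 17)           # built-in modular exponentiation in one call

assert full == stepwise == builtin
print(full)  # prints 10, the common residue
```

This also confirms the intermediate claim $3^{13} \equiv 12 \pmod{17}$: `pow(3, 13, 17)` returns `12`.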
Proving a well-known inequality using S.O.S
2,116,233
6
https://math.stackexchange.com/questions/2116233/proving-a-well-known-inequality-using-s-o-s
<p>Using $AM-GM$ inequality, it is easy to show for $a,b,c&gt;0$, $$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} \ge 3.$$ However, I can't seem to find an <a href="http://www.artofproblemsolving.com/community/c6h80127" rel="nofollow noreferrer">S.O.S form</a> for $a,b,c$ $$f(a,b,c) = \frac{a}{b} + \frac{b}{c} + \frac{c}{a} - 3 = \sum_{cyc}S_A(b-c)^2 \ge 0.$$</p> <p><strong>Update:</strong></p> <p>Please note that I'm looking for an S.O.S form for $a, b, c$, or a proof that there is no S.O.S form for $a, b, c$. Substituting other variables may help to solve the problem using the S.O.S method, but those are S.O.S forms for some other variables, not $a, b, c$.</p>
[ { "answer_id": 2116261, "score": 4, "text": "<p>Let $c=\\min\\{a,b,c\\}$. Hence,\n$$\\frac{a}{b}+\\frac{b}{c}+\\frac{c}{a}-3=\\frac{a}{b}+\\frac{b}{a}-2+\\frac{b}{c}+\\frac{c}{a}-\\frac{b}{a}-1=\\frac{(a-b)^2}{ab}+\\frac{(c-a)(c-b)}{ac}\\geq0$$</p>\n\n<p>From here we can get a SOS form:\n$$\\frac{a}{b}+\\frac{b}{c}+\\frac{c}{a}-3=\\frac{1}{6abc}\\sum_{cyc}(a-b)^2(3c+a-b)$$</p>\n" }, { "answer_id": 3798184, "score": 2, "text": "<p>Here is another SOS (Shortest)\n<span class=\"math-container\">$$ab^2+bc^2+ca^2-3abc={\\frac {a \\left( c-a \\right) ^{2} \\left({b}^{2}+ ac+cb \\right) +b\n \\left( 2\\,ab+{c}^{2}-3\\,ac \\right) ^{2}}{4\\,ab+ \\left( c-a \\right) ^\n{2}}} \\geqslant 0$$</span></p>\n" }, { "answer_id": 2116312, "score": 1, "text": "<p>$$\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{a} \\ge 3 \\Leftrightarrow a^2c + b^2a + c^2b \\ge 3abc$$\nSo we use these S.O.S forms:</p>\n\n<ul>\n<li>$a^3 + b^3 + c^3 - 3abc = \\frac{1}{2}(a+b+c)\\sum_{cyc}(a-b)^2$.</li>\n<li>$a^3 + b^3 + c^3 - a^2c - b^2a - c^2b = \\frac{1}{3}\\sum_{cyc}a^3 + a^3 + c^3-3a^2c = \\frac{1}{3}\\sum_{cyc} (2a+c)(c-a)^2$.\nHence, the S.O.S form would be \n$$\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{a} - 3 = \\sum_{cyc}\\frac{3b+c-a}{6abc}(c-a)^2.$$</li>\n</ul>\n" }, { "answer_id": 3783775, "score": 1, "text": "<p>Because<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$\\frac{a}{b}+\\frac{b}{c}+\\frac{c}{a} -3=\\frac{ab^2 +bc^2 +ca^2 -3abc}{abc}$$</span>\nIt's enough to prove<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$ab^2 +bc^2 +ca^2 -3abc \\geqslant 0$$</span> by SOS.</p>\n<p>Let <span class=\"math-container\">$$\\text{P}=2(a+b+c)\\cdot (ab^2 +bc^2 +ca^2 -3abc)$$</span>\nWe have<span class=\"math-container\">$:$</span>\n<span class=\"math-container\">$$\\text{P}=a \\left( a+2\\,b \\right) \\left( b-c \\right) ^{2}+c \\left( c+2\\,a\n \\right) \\left( a-b \\right) ^{2}+b \\left( b+2\\,c \\right) \\left( c-a\n \\right) ^{2}$$</span>\nLet me 
explain the method<span class=\"math-container\">$,$</span> let\n<span class=\"math-container\">$$\\text{P}_{\\text{sos}}=\\sum \\left( {\\it QQ}_{{1}}{a}^{2}+{\\it QQ}_{{4}}ab+{\\it QQ}_{{5}}ac+{\\it \nQQ}_{{2}}{b}^{2}+{\\it QQ}_{{6}}bc+{\\it QQ}_{{3}}{c}^{2} \\right) \n \\left( a-b \\right) ^{2}\n$$</span>\nGet an identity give <span class=\"math-container\">$$\\left\\{\\begin{matrix}\n -{\\it QQ}_{{1}}-{\\it QQ}_{{2}}=0&amp;\\\\2\\,{\\it QQ}_{{1}}-{\\it QQ}_{\n{4}}-{\\it QQ}_{{6}}=0&amp;\\\\2\\,{\\it QQ}_{{2}}-{\\it QQ}_{{4}}-{\\it QQ}_{{5}}+\n2=0&amp;\\\\2\\,{\\it QQ}_{{3}}+{\\it QQ}_{{5}}+{\\it QQ}_{{6}}-4=0&amp;\\\\-{\\it QQ}_{{1}\n}-{\\it QQ}_{{2}}-2\\,{\\it QQ}_{{3}}+2\\,{\\it QQ}_{{4}}+2=0 \n &amp;\n\\end{matrix}\\right.$$</span></p>\n<p>Solve this with <span class=\"math-container\">$Q_i \\geqslant 0 ,(i=1..6)$</span> give us<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$ \\left\\{ {\\it QQ}_{{1}}=0,{\\it QQ}_{{2}}=0,{\\it QQ}_{{3}}=1,{\\it QQ}_{\n{4}}=0,{\\it QQ}_{{5}}=2,{\\it QQ}_{{6}}=0 \\right\\} $$</span></p>\n<p>Take it into <span class=\"math-container\">$\\text{P}_{\\text{sos}}$</span> give the SOS form.\nBy the same way<span class=\"math-container\">$,$</span> we have<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$\\text{P}=\\frac{1}{3} \\sum (2ab-bc-ca)^2 +\\sum 2ab(b-c)^2$$</span></p>\n" }, { "answer_id": 3784645, "score": 1, "text": "<p>We have\n<span class=\"math-container\">$$\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{a} - 3 = \\frac{(a-b)^2}{(a+b)b} + \\frac{(b-c)^2}{(a+b)c} + \\frac{(c-a)^2b}{(a+b)ca}.$$</span></p>\n" }, { "answer_id": 2116261, "score": 4, "text": "<p>Let $c=\\min\\{a,b,c\\}$. 
Hence,\n$$\\frac{a}{b}+\\frac{b}{c}+\\frac{c}{a}-3=\\frac{a}{b}+\\frac{b}{a}-2+\\frac{b}{c}+\\frac{c}{a}-\\frac{b}{a}-1=\\frac{(a-b)^2}{ab}+\\frac{(c-a)(c-b)}{ac}\\geq0$$</p>\n\n<p>From here we can get a SOS form:\n$$\\frac{a}{b}+\\frac{b}{c}+\\frac{c}{a}-3=\\frac{1}{6abc}\\sum_{cyc}(a-b)^2(3c+a-b)$$</p>\n" }, { "answer_id": 3798184, "score": 2, "text": "<p>Here is another SOS (Shortest)\n<span class=\"math-container\">$$ab^2+bc^2+ca^2-3abc={\\frac {a \\left( c-a \\right) ^{2} \\left({b}^{2}+ ac+cb \\right) +b\n \\left( 2\\,ab+{c}^{2}-3\\,ac \\right) ^{2}}{4\\,ab+ \\left( c-a \\right) ^\n{2}}} \\geqslant 0$$</span></p>\n" }, { "answer_id": 2116312, "score": 1, "text": "<p>$$\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{a} \\ge 3 \\Leftrightarrow a^2c + b^2a + c^2b \\ge 3abc$$\nSo we use these S.O.S forms:</p>\n\n<ul>\n<li>$a^3 + b^3 + c^3 - 3abc = \\frac{1}{2}(a+b+c)\\sum_{cyc}(a-b)^2$.</li>\n<li>$a^3 + b^3 + c^3 - a^2c - b^2a - c^2b = \\frac{1}{3}\\sum_{cyc}a^3 + a^3 + c^3-3a^2c = \\frac{1}{3}\\sum_{cyc} (2a+c)(c-a)^2$.\nHence, the S.O.S form would be \n$$\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{a} - 3 = \\sum_{cyc}\\frac{3b+c-a}{6abc}(c-a)^2.$$</li>\n</ul>\n" }, { "answer_id": 3783775, "score": 1, "text": "<p>Because<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$\\frac{a}{b}+\\frac{b}{c}+\\frac{c}{a} -3=\\frac{ab^2 +bc^2 +ca^2 -3abc}{abc}$$</span>\nIt's enough to prove<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$ab^2 +bc^2 +ca^2 -3abc \\geqslant 0$$</span> by SOS.</p>\n<p>Let <span class=\"math-container\">$$\\text{P}=2(a+b+c)\\cdot (ab^2 +bc^2 +ca^2 -3abc)$$</span>\nWe have<span class=\"math-container\">$:$</span>\n<span class=\"math-container\">$$\\text{P}=a \\left( a+2\\,b \\right) \\left( b-c \\right) ^{2}+c \\left( c+2\\,a\n \\right) \\left( a-b \\right) ^{2}+b \\left( b+2\\,c \\right) \\left( c-a\n \\right) ^{2}$$</span>\nLet me explain the method<span class=\"math-container\">$,$</span> let\n<span 
class=\"math-container\">$$\\text{P}_{\\text{sos}}=\\sum \\left( {\\it QQ}_{{1}}{a}^{2}+{\\it QQ}_{{4}}ab+{\\it QQ}_{{5}}ac+{\\it \nQQ}_{{2}}{b}^{2}+{\\it QQ}_{{6}}bc+{\\it QQ}_{{3}}{c}^{2} \\right) \n \\left( a-b \\right) ^{2}\n$$</span>\nGet an identity give <span class=\"math-container\">$$\\left\\{\\begin{matrix}\n -{\\it QQ}_{{1}}-{\\it QQ}_{{2}}=0&amp;\\\\2\\,{\\it QQ}_{{1}}-{\\it QQ}_{\n{4}}-{\\it QQ}_{{6}}=0&amp;\\\\2\\,{\\it QQ}_{{2}}-{\\it QQ}_{{4}}-{\\it QQ}_{{5}}+\n2=0&amp;\\\\2\\,{\\it QQ}_{{3}}+{\\it QQ}_{{5}}+{\\it QQ}_{{6}}-4=0&amp;\\\\-{\\it QQ}_{{1}\n}-{\\it QQ}_{{2}}-2\\,{\\it QQ}_{{3}}+2\\,{\\it QQ}_{{4}}+2=0 \n &amp;\n\\end{matrix}\\right.$$</span></p>\n<p>Solve this with <span class=\"math-container\">$Q_i \\geqslant 0 ,(i=1..6)$</span> give us<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$ \\left\\{ {\\it QQ}_{{1}}=0,{\\it QQ}_{{2}}=0,{\\it QQ}_{{3}}=1,{\\it QQ}_{\n{4}}=0,{\\it QQ}_{{5}}=2,{\\it QQ}_{{6}}=0 \\right\\} $$</span></p>\n<p>Take it into <span class=\"math-container\">$\\text{P}_{\\text{sos}}$</span> give the SOS form.\nBy the same way<span class=\"math-container\">$,$</span> we have<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$\\text{P}=\\frac{1}{3} \\sum (2ab-bc-ca)^2 +\\sum 2ab(b-c)^2$$</span></p>\n" }, { "answer_id": 3784645, "score": 1, "text": "<p>We have\n<span class=\"math-container\">$$\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{a} - 3 = \\frac{(a-b)^2}{(a+b)b} + \\frac{(b-c)^2}{(a+b)c} + \\frac{(c-a)^2b}{(a+b)ca}.$$</span></p>\n" }, { "answer_id": 2116261, "score": 4, "text": "<p>Let $c=\\min\\{a,b,c\\}$. 
Hence,\n$$\\frac{a}{b}+\\frac{b}{c}+\\frac{c}{a}-3=\\frac{a}{b}+\\frac{b}{a}-2+\\frac{b}{c}+\\frac{c}{a}-\\frac{b}{a}-1=\\frac{(a-b)^2}{ab}+\\frac{(c-a)(c-b)}{ac}\\geq0$$</p>\n\n<p>From here we can get a SOS form:\n$$\\frac{a}{b}+\\frac{b}{c}+\\frac{c}{a}-3=\\frac{1}{6abc}\\sum_{cyc}(a-b)^2(3c+a-b)$$</p>\n" }, { "answer_id": 3798184, "score": 2, "text": "<p>Here is another SOS (Shortest)\n<span class=\"math-container\">$$ab^2+bc^2+ca^2-3abc={\\frac {a \\left( c-a \\right) ^{2} \\left({b}^{2}+ ac+cb \\right) +b\n \\left( 2\\,ab+{c}^{2}-3\\,ac \\right) ^{2}}{4\\,ab+ \\left( c-a \\right) ^\n{2}}} \\geqslant 0$$</span></p>\n" }, { "answer_id": 2116312, "score": 1, "text": "<p>$$\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{a} \\ge 3 \\Leftrightarrow a^2c + b^2a + c^2b \\ge 3abc$$\nSo we use these S.O.S forms:</p>\n\n<ul>\n<li>$a^3 + b^3 + c^3 - 3abc = \\frac{1}{2}(a+b+c)\\sum_{cyc}(a-b)^2$.</li>\n<li>$a^3 + b^3 + c^3 - a^2c - b^2a - c^2b = \\frac{1}{3}\\sum_{cyc}a^3 + a^3 + c^3-3a^2c = \\frac{1}{3}\\sum_{cyc} (2a+c)(c-a)^2$.\nHence, the S.O.S form would be \n$$\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{a} - 3 = \\sum_{cyc}\\frac{3b+c-a}{6abc}(c-a)^2.$$</li>\n</ul>\n" }, { "answer_id": 3783775, "score": 1, "text": "<p>Because<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$\\frac{a}{b}+\\frac{b}{c}+\\frac{c}{a} -3=\\frac{ab^2 +bc^2 +ca^2 -3abc}{abc}$$</span>\nIt's enough to prove<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$ab^2 +bc^2 +ca^2 -3abc \\geqslant 0$$</span> by SOS.</p>\n<p>Let <span class=\"math-container\">$$\\text{P}=2(a+b+c)\\cdot (ab^2 +bc^2 +ca^2 -3abc)$$</span>\nWe have<span class=\"math-container\">$:$</span>\n<span class=\"math-container\">$$\\text{P}=a \\left( a+2\\,b \\right) \\left( b-c \\right) ^{2}+c \\left( c+2\\,a\n \\right) \\left( a-b \\right) ^{2}+b \\left( b+2\\,c \\right) \\left( c-a\n \\right) ^{2}$$</span>\nLet me explain the method<span class=\"math-container\">$,$</span> let\n<span 
class=\"math-container\">$$\\text{P}_{\\text{sos}}=\\sum \\left( {\\it QQ}_{{1}}{a}^{2}+{\\it QQ}_{{4}}ab+{\\it QQ}_{{5}}ac+{\\it \nQQ}_{{2}}{b}^{2}+{\\it QQ}_{{6}}bc+{\\it QQ}_{{3}}{c}^{2} \\right) \n \\left( a-b \\right) ^{2}\n$$</span>\nGet an identity give <span class=\"math-container\">$$\\left\\{\\begin{matrix}\n -{\\it QQ}_{{1}}-{\\it QQ}_{{2}}=0&amp;\\\\2\\,{\\it QQ}_{{1}}-{\\it QQ}_{\n{4}}-{\\it QQ}_{{6}}=0&amp;\\\\2\\,{\\it QQ}_{{2}}-{\\it QQ}_{{4}}-{\\it QQ}_{{5}}+\n2=0&amp;\\\\2\\,{\\it QQ}_{{3}}+{\\it QQ}_{{5}}+{\\it QQ}_{{6}}-4=0&amp;\\\\-{\\it QQ}_{{1}\n}-{\\it QQ}_{{2}}-2\\,{\\it QQ}_{{3}}+2\\,{\\it QQ}_{{4}}+2=0 \n &amp;\n\\end{matrix}\\right.$$</span></p>\n<p>Solve this with <span class=\"math-container\">$Q_i \\geqslant 0 ,(i=1..6)$</span> give us<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$ \\left\\{ {\\it QQ}_{{1}}=0,{\\it QQ}_{{2}}=0,{\\it QQ}_{{3}}=1,{\\it QQ}_{\n{4}}=0,{\\it QQ}_{{5}}=2,{\\it QQ}_{{6}}=0 \\right\\} $$</span></p>\n<p>Take it into <span class=\"math-container\">$\\text{P}_{\\text{sos}}$</span> give the SOS form.\nBy the same way<span class=\"math-container\">$,$</span> we have<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$\\text{P}=\\frac{1}{3} \\sum (2ab-bc-ca)^2 +\\sum 2ab(b-c)^2$$</span></p>\n" }, { "answer_id": 3784645, "score": 1, "text": "<p>We have\n<span class=\"math-container\">$$\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{a} - 3 = \\frac{(a-b)^2}{(a+b)b} + \\frac{(b-c)^2}{(a+b)c} + \\frac{(c-a)^2b}{(a+b)ca}.$$</span></p>\n" }, { "answer_id": 2116261, "score": 4, "text": "<p>Let $c=\\min\\{a,b,c\\}$. 
Hence,\n$$\\frac{a}{b}+\\frac{b}{c}+\\frac{c}{a}-3=\\frac{a}{b}+\\frac{b}{a}-2+\\frac{b}{c}+\\frac{c}{a}-\\frac{b}{a}-1=\\frac{(a-b)^2}{ab}+\\frac{(c-a)(c-b)}{ac}\\geq0$$</p>\n\n<p>From here we can get a SOS form:\n$$\\frac{a}{b}+\\frac{b}{c}+\\frac{c}{a}-3=\\frac{1}{6abc}\\sum_{cyc}(a-b)^2(3c+a-b)$$</p>\n" }, { "answer_id": 3798184, "score": 2, "text": "<p>Here is another SOS (Shortest)\n<span class=\"math-container\">$$ab^2+bc^2+ca^2-3abc={\\frac {a \\left( c-a \\right) ^{2} \\left({b}^{2}+ ac+cb \\right) +b\n \\left( 2\\,ab+{c}^{2}-3\\,ac \\right) ^{2}}{4\\,ab+ \\left( c-a \\right) ^\n{2}}} \\geqslant 0$$</span></p>\n" }, { "answer_id": 2116312, "score": 1, "text": "<p>$$\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{a} \\ge 3 \\Leftrightarrow a^2c + b^2a + c^2b \\ge 3abc$$\nSo we use these S.O.S forms:</p>\n\n<ul>\n<li>$a^3 + b^3 + c^3 - 3abc = \\frac{1}{2}(a+b+c)\\sum_{cyc}(a-b)^2$.</li>\n<li>$a^3 + b^3 + c^3 - a^2c - b^2a - c^2b = \\frac{1}{3}\\sum_{cyc}a^3 + a^3 + c^3-3a^2c = \\frac{1}{3}\\sum_{cyc} (2a+c)(c-a)^2$.\nHence, the S.O.S form would be \n$$\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{a} - 3 = \\sum_{cyc}\\frac{3b+c-a}{6abc}(c-a)^2.$$</li>\n</ul>\n" }, { "answer_id": 3783775, "score": 1, "text": "<p>Because<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$\\frac{a}{b}+\\frac{b}{c}+\\frac{c}{a} -3=\\frac{ab^2 +bc^2 +ca^2 -3abc}{abc}$$</span>\nIt's enough to prove<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$ab^2 +bc^2 +ca^2 -3abc \\geqslant 0$$</span> by SOS.</p>\n<p>Let <span class=\"math-container\">$$\\text{P}=2(a+b+c)\\cdot (ab^2 +bc^2 +ca^2 -3abc)$$</span>\nWe have<span class=\"math-container\">$:$</span>\n<span class=\"math-container\">$$\\text{P}=a \\left( a+2\\,b \\right) \\left( b-c \\right) ^{2}+c \\left( c+2\\,a\n \\right) \\left( a-b \\right) ^{2}+b \\left( b+2\\,c \\right) \\left( c-a\n \\right) ^{2}$$</span>\nLet me explain the method<span class=\"math-container\">$,$</span> let\n<span 
class=\"math-container\">$$\\text{P}_{\\text{sos}}=\\sum \\left( {\\it QQ}_{{1}}{a}^{2}+{\\it QQ}_{{4}}ab+{\\it QQ}_{{5}}ac+{\\it \nQQ}_{{2}}{b}^{2}+{\\it QQ}_{{6}}bc+{\\it QQ}_{{3}}{c}^{2} \\right) \n \\left( a-b \\right) ^{2}\n$$</span>\nGet an identity give <span class=\"math-container\">$$\\left\\{\\begin{matrix}\n -{\\it QQ}_{{1}}-{\\it QQ}_{{2}}=0&amp;\\\\2\\,{\\it QQ}_{{1}}-{\\it QQ}_{\n{4}}-{\\it QQ}_{{6}}=0&amp;\\\\2\\,{\\it QQ}_{{2}}-{\\it QQ}_{{4}}-{\\it QQ}_{{5}}+\n2=0&amp;\\\\2\\,{\\it QQ}_{{3}}+{\\it QQ}_{{5}}+{\\it QQ}_{{6}}-4=0&amp;\\\\-{\\it QQ}_{{1}\n}-{\\it QQ}_{{2}}-2\\,{\\it QQ}_{{3}}+2\\,{\\it QQ}_{{4}}+2=0 \n &amp;\n\\end{matrix}\\right.$$</span></p>\n<p>Solve this with <span class=\"math-container\">$Q_i \\geqslant 0 ,(i=1..6)$</span> give us<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$ \\left\\{ {\\it QQ}_{{1}}=0,{\\it QQ}_{{2}}=0,{\\it QQ}_{{3}}=1,{\\it QQ}_{\n{4}}=0,{\\it QQ}_{{5}}=2,{\\it QQ}_{{6}}=0 \\right\\} $$</span></p>\n<p>Take it into <span class=\"math-container\">$\\text{P}_{\\text{sos}}$</span> give the SOS form.\nBy the same way<span class=\"math-container\">$,$</span> we have<span class=\"math-container\">$:$</span> <span class=\"math-container\">$$\\text{P}=\\frac{1}{3} \\sum (2ab-bc-ca)^2 +\\sum 2ab(b-c)^2$$</span></p>\n" }, { "answer_id": 3784645, "score": 1, "text": "<p>We have\n<span class=\"math-container\">$$\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{a} - 3 = \\frac{(a-b)^2}{(a+b)b} + \\frac{(b-c)^2}{(a+b)c} + \\frac{(c-a)^2b}{(a+b)ca}.$$</span></p>\n" } ]
Question: Proving a well-known inequality using S.O.S Answer 1: <p>Let $c=\min\{a,b,c\}$. Hence, $$\frac{a}{b}+\frac{b}{c}+\frac{c}{a}-3=\frac{a}{b}+\frac{b}{a}-2+\frac{b}{c}+\frac{c}{a}-\frac{b}{a}-1=\frac{(a-b)^2}{ab}+\frac{(c-a)(c-b)}{ac}\geq0$$</p> <p>From here we can get a SOS form: $$\frac{a}{b}+\frac{b}{c}+\frac{c}{a}-3=\frac{1}{6abc}\sum_{cyc}(a-b)^2(3c+a-b)$$</p> Answer 2: <p>Here is another SOS (Shortest) <span class="math-container">$$ab^2+bc^2+ca^2-3abc={\frac {a \left( c-a \right) ^{2} \left({b}^{2}+ ac+cb \right) +b \left( 2\,ab+{c}^{2}-3\,ac \right) ^{2}}{4\,ab+ \left( c-a \right) ^ {2}}} \geqslant 0$$</span></p> Answer 3: <p>$$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} \ge 3 \Leftrightarrow a^2c + b^2a + c^2b \ge 3abc$$ So we use these S.O.S forms:</p> <ul> <li>$a^3 + b^3 + c^3 - 3abc = \frac{1}{2}(a+b+c)\sum_{cyc}(a-b)^2$.</li> <li>$a^3 + b^3 + c^3 - a^2c - b^2a - c^2b = \frac{1}{3}\sum_{cyc}a^3 + a^3 + c^3-3a^2c = \frac{1}{3}\sum_{cyc} (2a+c)(c-a)^2$. Hence, the S.O.S form would be $$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} - 3 = \sum_{cyc}\frac{3b+c-a}{6abc}(c-a)^2.$$</li> </ul> Answer 4: <p>Because<span class="math-container">$:$</span> <span class="math-container">$$\frac{a}{b}+\frac{b}{c}+\frac{c}{a} -3=\frac{ab^2 +bc^2 +ca^2 -3abc}{abc}$$</span> It's enough to prove<span class="math-container">$:$</span> <span class="math-container">$$ab^2 +bc^2 +ca^2 -3abc \geqslant 0$$</span> by SOS.</p> <p>Let <span class="math-container">$$\text{P}=2(a+b+c)\cdot (ab^2 +bc^2 +ca^2 -3abc)$$</span> We have<span class="math-container">$:$</span> <span class="math-container">$$\text{P}=a \left( a+2\,b \right) \left( b-c \right) ^{2}+c \left( c+2\,a \right) \left( a-b \right) ^{2}+b \left( b+2\,c \right) \left( c-a \right) ^{2}$$</span> Let me explain the method<span class="math-container">$,$</span> let <span class="math-container">$$\text{P}_{\text{sos}}=\sum \left( {\it QQ}_{{1}}{a}^{2}+{\it QQ}_{{4}}ab+{\it QQ}_{{5}}ac+{\it QQ}_{{2}}{b}^{2}+{\it 
QQ}_{{6}}bc+{\it QQ}_{{3}}{c}^{2} \right) \left( a-b \right) ^{2} $$</span> Get an identity give <span class="math-container">$$\left\{\begin{matrix} -{\it QQ}_{{1}}-{\it QQ}_{{2}}=0&amp;\\2\,{\it QQ}_{{1}}-{\it QQ}_{ {4}}-{\it QQ}_{{6}}=0&amp;\\2\,{\it QQ}_{{2}}-{\it QQ}_{{4}}-{\it QQ}_{{5}}+ 2=0&amp;\\2\,{\it QQ}_{{3}}+{\it QQ}_{{5}}+{\it QQ}_{{6}}-4=0&amp;\\-{\it QQ}_{{1} }-{\it QQ}_{{2}}-2\,{\it QQ}_{{3}}+2\,{\it QQ}_{{4}}+2=0 &amp; \end{matrix}\right.$$</span></p> <p>Solve this with <span class="math-container">$Q_i \geqslant 0 ,(i=1..6)$</span> give us<span class="math-container">$:$</span> <span class="math-container">$$ \left\{ {\it QQ}_{{1}}=0,{\it QQ}_{{2}}=0,{\it QQ}_{{3}}=1,{\it QQ}_{ {4}}=0,{\it QQ}_{{5}}=2,{\it QQ}_{{6}}=0 \right\} $$</span></p> <p>Take it into <span class="math-container">$\text{P}_{\text{sos}}$</span> give the SOS form. By the same way<span class="math-container">$,$</span> we have<span class="math-container">$:$</span> <span class="math-container">$$\text{P}=\frac{1}{3} \sum (2ab-bc-ca)^2 +\sum 2ab(b-c)^2$$</span></p> Answer 5: <p>We have <span class="math-container">$$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} - 3 = \frac{(a-b)^2}{(a+b)b} + \frac{(b-c)^2}{(a+b)c} + \frac{(c-a)^2b}{(a+b)ca}.$$</span></p> Answer 6: <p>Let $c=\min\{a,b,c\}$. 
Hence, $$\frac{a}{b}+\frac{b}{c}+\frac{c}{a}-3=\frac{a}{b}+\frac{b}{a}-2+\frac{b}{c}+\frac{c}{a}-\frac{b}{a}-1=\frac{(a-b)^2}{ab}+\frac{(c-a)(c-b)}{ac}\geq0$$</p> <p>From here we can get a SOS form: $$\frac{a}{b}+\frac{b}{c}+\frac{c}{a}-3=\frac{1}{6abc}\sum_{cyc}(a-b)^2(3c+a-b)$$</p> Answer 7: <p>Here is another SOS (Shortest) <span class="math-container">$$ab^2+bc^2+ca^2-3abc={\frac {a \left( c-a \right) ^{2} \left({b}^{2}+ ac+cb \right) +b \left( 2\,ab+{c}^{2}-3\,ac \right) ^{2}}{4\,ab+ \left( c-a \right) ^ {2}}} \geqslant 0$$</span></p> Answer 8: <p>$$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} \ge 3 \Leftrightarrow a^2c + b^2a + c^2b \ge 3abc$$ So we use these S.O.S forms:</p> <ul> <li>$a^3 + b^3 + c^3 - 3abc = \frac{1}{2}(a+b+c)\sum_{cyc}(a-b)^2$.</li> <li>$a^3 + b^3 + c^3 - a^2c - b^2a - c^2b = \frac{1}{3}\sum_{cyc}a^3 + a^3 + c^3-3a^2c = \frac{1}{3}\sum_{cyc} (2a+c)(c-a)^2$. Hence, the S.O.S form would be $$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} - 3 = \sum_{cyc}\frac{3b+c-a}{6abc}(c-a)^2.$$</li> </ul> Answer 9: <p>Because<span class="math-container">$:$</span> <span class="math-container">$$\frac{a}{b}+\frac{b}{c}+\frac{c}{a} -3=\frac{ab^2 +bc^2 +ca^2 -3abc}{abc}$$</span> It's enough to prove<span class="math-container">$:$</span> <span class="math-container">$$ab^2 +bc^2 +ca^2 -3abc \geqslant 0$$</span> by SOS.</p> <p>Let <span class="math-container">$$\text{P}=2(a+b+c)\cdot (ab^2 +bc^2 +ca^2 -3abc)$$</span> We have<span class="math-container">$:$</span> <span class="math-container">$$\text{P}=a \left( a+2\,b \right) \left( b-c \right) ^{2}+c \left( c+2\,a \right) \left( a-b \right) ^{2}+b \left( b+2\,c \right) \left( c-a \right) ^{2}$$</span> Let me explain the method<span class="math-container">$,$</span> let <span class="math-container">$$\text{P}_{\text{sos}}=\sum \left( {\it QQ}_{{1}}{a}^{2}+{\it QQ}_{{4}}ab+{\it QQ}_{{5}}ac+{\it QQ}_{{2}}{b}^{2}+{\it QQ}_{{6}}bc+{\it QQ}_{{3}}{c}^{2} \right) \left( a-b \right) ^{2} $$</span> Get an identity give <span 
class="math-container">$$\left\{\begin{matrix} -{\it QQ}_{{1}}-{\it QQ}_{{2}}=0&amp;\\2\,{\it QQ}_{{1}}-{\it QQ}_{ {4}}-{\it QQ}_{{6}}=0&amp;\\2\,{\it QQ}_{{2}}-{\it QQ}_{{4}}-{\it QQ}_{{5}}+ 2=0&amp;\\2\,{\it QQ}_{{3}}+{\it QQ}_{{5}}+{\it QQ}_{{6}}-4=0&amp;\\-{\it QQ}_{{1} }-{\it QQ}_{{2}}-2\,{\it QQ}_{{3}}+2\,{\it QQ}_{{4}}+2=0 &amp; \end{matrix}\right.$$</span></p> <p>Solve this with <span class="math-container">$Q_i \geqslant 0 ,(i=1..6)$</span> give us<span class="math-container">$:$</span> <span class="math-container">$$ \left\{ {\it QQ}_{{1}}=0,{\it QQ}_{{2}}=0,{\it QQ}_{{3}}=1,{\it QQ}_{ {4}}=0,{\it QQ}_{{5}}=2,{\it QQ}_{{6}}=0 \right\} $$</span></p> <p>Take it into <span class="math-container">$\text{P}_{\text{sos}}$</span> give the SOS form. By the same way<span class="math-container">$,$</span> we have<span class="math-container">$:$</span> <span class="math-container">$$\text{P}=\frac{1}{3} \sum (2ab-bc-ca)^2 +\sum 2ab(b-c)^2$$</span></p> Answer 10: <p>We have <span class="math-container">$$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} - 3 = \frac{(a-b)^2}{(a+b)b} + \frac{(b-c)^2}{(a+b)c} + \frac{(c-a)^2b}{(a+b)ca}.$$</span></p> Answer 11: <p>Let $c=\min\{a,b,c\}$. 
Hence, $$\frac{a}{b}+\frac{b}{c}+\frac{c}{a}-3=\frac{a}{b}+\frac{b}{a}-2+\frac{b}{c}+\frac{c}{a}-\frac{b}{a}-1=\frac{(a-b)^2}{ab}+\frac{(c-a)(c-b)}{ac}\geq0$$</p> <p>From here we can get a SOS form: $$\frac{a}{b}+\frac{b}{c}+\frac{c}{a}-3=\frac{1}{6abc}\sum_{cyc}(a-b)^2(3c+a-b)$$</p> Answer 12: <p>Here is another SOS (Shortest) <span class="math-container">$$ab^2+bc^2+ca^2-3abc={\frac {a \left( c-a \right) ^{2} \left({b}^{2}+ ac+cb \right) +b \left( 2\,ab+{c}^{2}-3\,ac \right) ^{2}}{4\,ab+ \left( c-a \right) ^ {2}}} \geqslant 0$$</span></p> Answer 13: <p>$$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} \ge 3 \Leftrightarrow a^2c + b^2a + c^2b \ge 3abc$$ So we use these S.O.S forms:</p> <ul> <li>$a^3 + b^3 + c^3 - 3abc = \frac{1}{2}(a+b+c)\sum_{cyc}(a-b)^2$.</li> <li>$a^3 + b^3 + c^3 - a^2c - b^2a - c^2b = \frac{1}{3}\sum_{cyc}a^3 + a^3 + c^3-3a^2c = \frac{1}{3}\sum_{cyc} (2a+c)(c-a)^2$. Hence, the S.O.S form would be $$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} - 3 = \sum_{cyc}\frac{3b+c-a}{6abc}(c-a)^2.$$</li> </ul> Answer 14: <p>Because<span class="math-container">$:$</span> <span class="math-container">$$\frac{a}{b}+\frac{b}{c}+\frac{c}{a} -3=\frac{ab^2 +bc^2 +ca^2 -3abc}{abc}$$</span> It's enough to prove<span class="math-container">$:$</span> <span class="math-container">$$ab^2 +bc^2 +ca^2 -3abc \geqslant 0$$</span> by SOS.</p> <p>Let <span class="math-container">$$\text{P}=2(a+b+c)\cdot (ab^2 +bc^2 +ca^2 -3abc)$$</span> We have<span class="math-container">$:$</span> <span class="math-container">$$\text{P}=a \left( a+2\,b \right) \left( b-c \right) ^{2}+c \left( c+2\,a \right) \left( a-b \right) ^{2}+b \left( b+2\,c \right) \left( c-a \right) ^{2}$$</span> Let me explain the method<span class="math-container">$,$</span> let <span class="math-container">$$\text{P}_{\text{sos}}=\sum \left( {\it QQ}_{{1}}{a}^{2}+{\it QQ}_{{4}}ab+{\it QQ}_{{5}}ac+{\it QQ}_{{2}}{b}^{2}+{\it QQ}_{{6}}bc+{\it QQ}_{{3}}{c}^{2} \right) \left( a-b \right) ^{2} $$</span> Get an identity give <span 
class="math-container">$$\left\{\begin{matrix} -{\it QQ}_{{1}}-{\it QQ}_{{2}}=0&amp;\\2\,{\it QQ}_{{1}}-{\it QQ}_{ {4}}-{\it QQ}_{{6}}=0&amp;\\2\,{\it QQ}_{{2}}-{\it QQ}_{{4}}-{\it QQ}_{{5}}+ 2=0&amp;\\2\,{\it QQ}_{{3}}+{\it QQ}_{{5}}+{\it QQ}_{{6}}-4=0&amp;\\-{\it QQ}_{{1} }-{\it QQ}_{{2}}-2\,{\it QQ}_{{3}}+2\,{\it QQ}_{{4}}+2=0 &amp; \end{matrix}\right.$$</span></p> <p>Solve this with <span class="math-container">$Q_i \geqslant 0 ,(i=1..6)$</span> give us<span class="math-container">$:$</span> <span class="math-container">$$ \left\{ {\it QQ}_{{1}}=0,{\it QQ}_{{2}}=0,{\it QQ}_{{3}}=1,{\it QQ}_{ {4}}=0,{\it QQ}_{{5}}=2,{\it QQ}_{{6}}=0 \right\} $$</span></p> <p>Take it into <span class="math-container">$\text{P}_{\text{sos}}$</span> give the SOS form. By the same way<span class="math-container">$,$</span> we have<span class="math-container">$:$</span> <span class="math-container">$$\text{P}=\frac{1}{3} \sum (2ab-bc-ca)^2 +\sum 2ab(b-c)^2$$</span></p> Answer 15: <p>We have <span class="math-container">$$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} - 3 = \frac{(a-b)^2}{(a+b)b} + \frac{(b-c)^2}{(a+b)c} + \frac{(c-a)^2b}{(a+b)ca}.$$</span></p> Answer 16: <p>Let $c=\min\{a,b,c\}$. 
Hence, $$\frac{a}{b}+\frac{b}{c}+\frac{c}{a}-3=\frac{a}{b}+\frac{b}{a}-2+\frac{b}{c}+\frac{c}{a}-\frac{b}{a}-1=\frac{(a-b)^2}{ab}+\frac{(c-a)(c-b)}{ac}\geq0$$</p> <p>From here we can get a SOS form: $$\frac{a}{b}+\frac{b}{c}+\frac{c}{a}-3=\frac{1}{6abc}\sum_{cyc}(a-b)^2(3c+a-b)$$</p> Answer 17: <p>Here is another SOS (Shortest) <span class="math-container">$$ab^2+bc^2+ca^2-3abc={\frac {a \left( c-a \right) ^{2} \left({b}^{2}+ ac+cb \right) +b \left( 2\,ab+{c}^{2}-3\,ac \right) ^{2}}{4\,ab+ \left( c-a \right) ^ {2}}} \geqslant 0$$</span></p> Answer 18: <p>$$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} \ge 3 \Leftrightarrow a^2c + b^2a + c^2b \ge 3abc$$ So we use these S.O.S forms:</p> <ul> <li>$a^3 + b^3 + c^3 - 3abc = \frac{1}{2}(a+b+c)\sum_{cyc}(a-b)^2$.</li> <li>$a^3 + b^3 + c^3 - a^2c - b^2a - c^2b = \frac{1}{3}\sum_{cyc}a^3 + a^3 + c^3-3a^2c = \frac{1}{3}\sum_{cyc} (2a+c)(c-a)^2$. Hence, the S.O.S form would be $$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} - 3 = \sum_{cyc}\frac{3b+c-a}{6abc}(c-a)^2.$$</li> </ul> Answer 19: <p>Because<span class="math-container">$:$</span> <span class="math-container">$$\frac{a}{b}+\frac{b}{c}+\frac{c}{a} -3=\frac{ab^2 +bc^2 +ca^2 -3abc}{abc}$$</span> It's enough to prove<span class="math-container">$:$</span> <span class="math-container">$$ab^2 +bc^2 +ca^2 -3abc \geqslant 0$$</span> by SOS.</p> <p>Let <span class="math-container">$$\text{P}=2(a+b+c)\cdot (ab^2 +bc^2 +ca^2 -3abc)$$</span> We have<span class="math-container">$:$</span> <span class="math-container">$$\text{P}=a \left( a+2\,b \right) \left( b-c \right) ^{2}+c \left( c+2\,a \right) \left( a-b \right) ^{2}+b \left( b+2\,c \right) \left( c-a \right) ^{2}$$</span> Let me explain the method<span class="math-container">$,$</span> let <span class="math-container">$$\text{P}_{\text{sos}}=\sum \left( {\it QQ}_{{1}}{a}^{2}+{\it QQ}_{{4}}ab+{\it QQ}_{{5}}ac+{\it QQ}_{{2}}{b}^{2}+{\it QQ}_{{6}}bc+{\it QQ}_{{3}}{c}^{2} \right) \left( a-b \right) ^{2} $$</span> Get an identity give <span 
class="math-container">$$\left\{\begin{matrix} -{\it QQ}_{{1}}-{\it QQ}_{{2}}=0&amp;\\2\,{\it QQ}_{{1}}-{\it QQ}_{ {4}}-{\it QQ}_{{6}}=0&amp;\\2\,{\it QQ}_{{2}}-{\it QQ}_{{4}}-{\it QQ}_{{5}}+ 2=0&amp;\\2\,{\it QQ}_{{3}}+{\it QQ}_{{5}}+{\it QQ}_{{6}}-4=0&amp;\\-{\it QQ}_{{1} }-{\it QQ}_{{2}}-2\,{\it QQ}_{{3}}+2\,{\it QQ}_{{4}}+2=0 &amp; \end{matrix}\right.$$</span></p> <p>Solve this with <span class="math-container">$Q_i \geqslant 0 ,(i=1..6)$</span> give us<span class="math-container">$:$</span> <span class="math-container">$$ \left\{ {\it QQ}_{{1}}=0,{\it QQ}_{{2}}=0,{\it QQ}_{{3}}=1,{\it QQ}_{ {4}}=0,{\it QQ}_{{5}}=2,{\it QQ}_{{6}}=0 \right\} $$</span></p> <p>Take it into <span class="math-container">$\text{P}_{\text{sos}}$</span> give the SOS form. By the same way<span class="math-container">$,$</span> we have<span class="math-container">$:$</span> <span class="math-container">$$\text{P}=\frac{1}{3} \sum (2ab-bc-ca)^2 +\sum 2ab(b-c)^2$$</span></p> Answer 20: <p>We have <span class="math-container">$$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} - 3 = \frac{(a-b)^2}{(a+b)b} + \frac{(b-c)^2}{(a+b)c} + \frac{(c-a)^2b}{(a+b)ca}.$$</span></p>
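Several of the answers above culminate in the same decomposition $\frac{a}{b}+\frac{b}{c}+\frac{c}{a}-3=\frac{1}{6abc}\sum_{cyc}(a-b)^2(3c+a-b)$. As a sanity check (not part of any answer), the identity can be verified exactly on sample rationals with Python's `fractions` module; the sample values are arbitrary:

```python
from fractions import Fraction
from itertools import product

def lhs(a, b, c):
    # Left-hand side: a/b + b/c + c/a - 3
    return a / b + b / c + c / a - 3

def sos(a, b, c):
    # SOS form: (1/(6abc)) * sum over cyclic shifts of (a-b)^2 * (3c + a - b)
    total = Fraction(0)
    for x, y, z in [(a, b, c), (b, c, a), (c, a, b)]:
        total += (x - y) ** 2 * (3 * z + x - y)
    return total / (6 * a * b * c)

# Exact check on a small grid of positive rationals.
samples = [Fraction(1), Fraction(2), Fraction(5, 3), Fraction(7, 2)]
for a, b, c in product(samples, repeat=3):
    assert lhs(a, b, c) == sos(a, b, c)
print("SOS identity verified on all sample triples")
```

Exact agreement on these triples is only a spot check, of course; the actual proof is the polynomial expansion carried out in the answers.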
If we suppose that a perfect odd number exists, can we determine whether infinitely many exist?
2,116,257
6
https://math.stackexchange.com/questions/2116257/if-we-suppose-that-a-perfect-odd-number-exists-can-we-determine-whether-infini
<p>It is currently not known whether odd perfect numbers (numbers with the property $\sigma(n)=2n$, where $\sigma(n)$ is the sum of divisors of $n$, including $1$ and $n$) exist.</p> <p>But suppose a perfect odd number exists.</p> <blockquote> <p>Do we then know whether there are infinitely many perfect odd numbers?</p> </blockquote> <p>This could be possible because various necessary conditions are known for an odd number to be perfect. Perhaps they allow one to construct arbitrarily many perfect odd numbers, supposing that an odd perfect number exists.</p>
[ { "answer_id": 2420388, "score": 1, "text": "<p><em>This is not a direct answer to your question, but it is certainly related.</em></p>\n\n<p>We do know that\n<span class=\"math-container\">$$I(q^k) &lt; \\frac{5}{4} &lt; \\frac{3}{2} \\leq I(2^{p-1})$$</span>\nwhere <span class=\"math-container\">$N=q^k n^2$</span> is an odd perfect number in Eulerian form and <span class=\"math-container\">$M=(2^p - 1){2^{p-1}}$</span> is an even perfect number in Euclidean form.</p>\n\n<p>Because of a property satisfied by the abundancy index <span class=\"math-container\">$I(x)=\\sigma(x)/x$</span> at prime powers, it follows that\n<span class=\"math-container\">$$2^{p-1} &lt; q^k.$$</span></p>\n\n<p>(<strong>NOTE THAT THE REASONING IS UNSOUND IN THE LAST PARAGRAPH, AS IT ONLY FOLLOWS FROM <span class=\"math-container\">$I(q^k) &lt; I(2^{p-1})$</span> THAT <span class=\"math-container\">$2 &lt; q$</span>. WHAT FOLLOWS IS THEREFORE INCONCLUSIVE, AT THIS POINT.</strong>)</p>\n\n<p>It follows that, since <span class=\"math-container\">$q^k &lt; n^2$</span> <a href=\"https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html\" rel=\"nofollow noreferrer\">[Dris, 2012]</a>, if <span class=\"math-container\">$N$</span> is bounded, then <span class=\"math-container\">$M$</span> is also bounded.</p>\n\n<p>Thus, if there exists an odd perfect number, we would infer that there are only finitely many even perfect numbers.</p>\n" }, { "answer_id": 3532589, "score": 1, "text": "<p>This <a href=\"https://math.stackexchange.com/a/2420379/28816\">answer</a> attempts to prove your conjecture that if there is an odd perfect number, then there are infinitely many of them.</p>\n\n<p>Briefly, the argument makes use of the following lemma:</p>\n\n<blockquote>\n <p><strong>Lemma</strong> If <span class=\"math-container\">$q^k n^2$</span> is an odd perfect number with special/Euler prime <span class=\"math-container\">$q$</span>, then the biconditional\n <span 
class=\"math-container\">$$\frac{q^2}{n}+\frac{n}{q^2} \text{ is bounded from above } \iff \frac{\sigma(q^2)}{n}+\frac{\sigma(n)}{q^2} \text{ is bounded from above}$$</span>\n holds.</p>\n</blockquote>\n\n<p>Details are in the following question:\n<a href=\"https://math.stackexchange.com/questions/2420351\">If <span class=\"math-container\">$q^k n^2$</span> is an odd perfect number with Euler prime <span class=\"math-container\">$q$</span>, which of the following relationships between <span class=\"math-container\">$q^2$</span> and <span class=\"math-container\">$n$</span> hold?</a></p>\n" } ]
number in Eulerian form and <span class=\"math-container\">$M=(2^p - 1){2^{p-1}}$</span> is an even perfect number in Euclidean form.</p>\n\n<p>Because of a property satisfied by the abundancy index <span class=\"math-container\">$I(x)=\\sigma(x)/x$</span> at prime powers, it follows that\n<span class=\"math-container\">$$2^{p-1} &lt; q^k.$$</span></p>\n\n<p>(<strong>NOTE THAT THE REASONING IS UNSOUND IN THE LAST PARAGRAPH, AS IT ONLY FOLLOWS FROM <span class=\"math-container\">$I(q^k) &lt; I(2^{p-1})$</span> THAT <span class=\"math-container\">$2 &lt; q$</span>. WHAT FOLLOWS IS THEREFORE INCONCLUSIVE, AT THIS POINT.</strong>)</p>\n\n<p>It follows that, since <span class=\"math-container\">$q^k &lt; n^2$</span> <a href=\"https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html\" rel=\"nofollow noreferrer\">[Dris, 2012]</a>, if <span class=\"math-container\">$N$</span> is bounded, then <span class=\"math-container\">$M$</span> is also bounded.</p>\n\n<p>Thus, if there exists an odd perfect number, we would infer that there are only finitely many even perfect numbers.</p>\n" }, { "answer_id": 3532589, "score": 1, "text": "<p>This <a href=\"https://math.stackexchange.com/a/2420379/28816\">answer</a> attempts to prove your conjecture that if there is an odd perfect number, then there are infinitely many of them.</p>\n\n<p>Briefly, the argument makes use of the following lemma:</p>\n\n<blockquote>\n <p><strong>Lemma</strong> If <span class=\"math-container\">$q^k n^2$</span> is an odd perfect number with special/Euler prime <span class=\"math-container\">$q$</span>, then the biconditional\n <span class=\"math-container\">$$\\frac{q^2}{n}+\\frac{n}{q^2} \\text{ is bounded from above } \\iff \\frac{\\sigma(q^2)}{n}+\\frac{\\sigma(n)}{q^2} \\text{ is bounded from above}$$</span>\n holds.</p>\n</blockquote>\n\n<p>Details are in the following question:\n<a href=\"https://math.stackexchange.com/questions/2420351\">If <span class=\"math-container\">$q^k n^2$</span> is an odd 
perfect number with Euler prime <span class=\"math-container\">$q$</span>, which of the following relationships between <span class=\"math-container\">$q^2$</span> and <span class=\"math-container\">$n$</span> hold?</a></p>\n" }, { "answer_id": 2420388, "score": 1, "text": "<p><em>This is not a direct answer to your question, but it is certainly related.</em></p>\n\n<p>We do know that\n<span class=\"math-container\">$$I(q^k) &lt; \\frac{5}{4} &lt; \\frac{3}{2} \\leq I(2^{p-1})$$</span>\nwhere <span class=\"math-container\">$N=q^k n^2$</span> is an odd perfect number in Eulerian form and <span class=\"math-container\">$M=(2^p - 1){2^{p-1}}$</span> is an even perfect number in Euclidean form.</p>\n\n<p>Because of a property satisfied by the abundancy index <span class=\"math-container\">$I(x)=\\sigma(x)/x$</span> at prime powers, it follows that\n<span class=\"math-container\">$$2^{p-1} &lt; q^k.$$</span></p>\n\n<p>(<strong>NOTE THAT THE REASONING IS UNSOUND IN THE LAST PARAGRAPH, AS IT ONLY FOLLOWS FROM <span class=\"math-container\">$I(q^k) &lt; I(2^{p-1})$</span> THAT <span class=\"math-container\">$2 &lt; q$</span>. 
WHAT FOLLOWS IS THEREFORE INCONCLUSIVE, AT THIS POINT.</strong>)</p>\n\n<p>It follows that, since <span class=\"math-container\">$q^k &lt; n^2$</span> <a href=\"https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html\" rel=\"nofollow noreferrer\">[Dris, 2012]</a>, if <span class=\"math-container\">$N$</span> is bounded, then <span class=\"math-container\">$M$</span> is also bounded.</p>\n\n<p>Thus, if there exists an odd perfect number, we would infer that there are only finitely many even perfect numbers.</p>\n" }, { "answer_id": 3532589, "score": 1, "text": "<p>This <a href=\"https://math.stackexchange.com/a/2420379/28816\">answer</a> attempts to prove your conjecture that if there is an odd perfect number, then there are infinitely many of them.</p>\n\n<p>Briefly, the argument makes use of the following lemma:</p>\n\n<blockquote>\n <p><strong>Lemma</strong> If <span class=\"math-container\">$q^k n^2$</span> is an odd perfect number with special/Euler prime <span class=\"math-container\">$q$</span>, then the biconditional\n <span class=\"math-container\">$$\\frac{q^2}{n}+\\frac{n}{q^2} \\text{ is bounded from above } \\iff \\frac{\\sigma(q^2)}{n}+\\frac{\\sigma(n)}{q^2} \\text{ is bounded from above}$$</span>\n holds.</p>\n</blockquote>\n\n<p>Details are in the following question:\n<a href=\"https://math.stackexchange.com/questions/2420351\">If <span class=\"math-container\">$q^k n^2$</span> is an odd perfect number with Euler prime <span class=\"math-container\">$q$</span>, which of the following relationships between <span class=\"math-container\">$q^2$</span> and <span class=\"math-container\">$n$</span> hold?</a></p>\n" }, { "answer_id": 2420388, "score": 1, "text": "<p><em>This is not a direct answer to your question, but it is certainly related.</em></p>\n\n<p>We do know that\n<span class=\"math-container\">$$I(q^k) &lt; \\frac{5}{4} &lt; \\frac{3}{2} \\leq I(2^{p-1})$$</span>\nwhere <span class=\"math-container\">$N=q^k n^2$</span> is an odd perfect 
number in Eulerian form and <span class=\"math-container\">$M=(2^p - 1){2^{p-1}}$</span> is an even perfect number in Euclidean form.</p>\n\n<p>Because of a property satisfied by the abundancy index <span class=\"math-container\">$I(x)=\\sigma(x)/x$</span> at prime powers, it follows that\n<span class=\"math-container\">$$2^{p-1} &lt; q^k.$$</span></p>\n\n<p>(<strong>NOTE THAT THE REASONING IS UNSOUND IN THE LAST PARAGRAPH, AS IT ONLY FOLLOWS FROM <span class=\"math-container\">$I(q^k) &lt; I(2^{p-1})$</span> THAT <span class=\"math-container\">$2 &lt; q$</span>. WHAT FOLLOWS IS THEREFORE INCONCLUSIVE, AT THIS POINT.</strong>)</p>\n\n<p>It follows that, since <span class=\"math-container\">$q^k &lt; n^2$</span> <a href=\"https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html\" rel=\"nofollow noreferrer\">[Dris, 2012]</a>, if <span class=\"math-container\">$N$</span> is bounded, then <span class=\"math-container\">$M$</span> is also bounded.</p>\n\n<p>Thus, if there exists an odd perfect number, we would infer that there are only finitely many even perfect numbers.</p>\n" }, { "answer_id": 3532589, "score": 1, "text": "<p>This <a href=\"https://math.stackexchange.com/a/2420379/28816\">answer</a> attempts to prove your conjecture that if there is an odd perfect number, then there are infinitely many of them.</p>\n\n<p>Briefly, the argument makes use of the following lemma:</p>\n\n<blockquote>\n <p><strong>Lemma</strong> If <span class=\"math-container\">$q^k n^2$</span> is an odd perfect number with special/Euler prime <span class=\"math-container\">$q$</span>, then the biconditional\n <span class=\"math-container\">$$\\frac{q^2}{n}+\\frac{n}{q^2} \\text{ is bounded from above } \\iff \\frac{\\sigma(q^2)}{n}+\\frac{\\sigma(n)}{q^2} \\text{ is bounded from above}$$</span>\n holds.</p>\n</blockquote>\n\n<p>Details are in the following question:\n<a href=\"https://math.stackexchange.com/questions/2420351\">If <span class=\"math-container\">$q^k n^2$</span> is an odd 
perfect number with Euler prime <span class=\"math-container\">$q$</span>, which of the following relationships between <span class=\"math-container\">$q^2$</span> and <span class=\"math-container\">$n$</span> hold?</a></p>\n" }, { "answer_id": 2420388, "score": 1, "text": "<p><em>This is not a direct answer to your question, but it is certainly related.</em></p>\n\n<p>We do know that\n<span class=\"math-container\">$$I(q^k) &lt; \\frac{5}{4} &lt; \\frac{3}{2} \\leq I(2^{p-1})$$</span>\nwhere <span class=\"math-container\">$N=q^k n^2$</span> is an odd perfect number in Eulerian form and <span class=\"math-container\">$M=(2^p - 1){2^{p-1}}$</span> is an even perfect number in Euclidean form.</p>\n\n<p>Because of a property satisfied by the abundancy index <span class=\"math-container\">$I(x)=\\sigma(x)/x$</span> at prime powers, it follows that\n<span class=\"math-container\">$$2^{p-1} &lt; q^k.$$</span></p>\n\n<p>(<strong>NOTE THAT THE REASONING IS UNSOUND IN THE LAST PARAGRAPH, AS IT ONLY FOLLOWS FROM <span class=\"math-container\">$I(q^k) &lt; I(2^{p-1})$</span> THAT <span class=\"math-container\">$2 &lt; q$</span>. 
WHAT FOLLOWS IS THEREFORE INCONCLUSIVE, AT THIS POINT.</strong>)</p>\n\n<p>It follows that, since <span class=\"math-container\">$q^k &lt; n^2$</span> <a href=\"https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html\" rel=\"nofollow noreferrer\">[Dris, 2012]</a>, if <span class=\"math-container\">$N$</span> is bounded, then <span class=\"math-container\">$M$</span> is also bounded.</p>\n\n<p>Thus, if there exists an odd perfect number, we would infer that there are only finitely many even perfect numbers.</p>\n" }, { "answer_id": 3532589, "score": 1, "text": "<p>This <a href=\"https://math.stackexchange.com/a/2420379/28816\">answer</a> attempts to prove your conjecture that if there is an odd perfect number, then there are infinitely many of them.</p>\n\n<p>Briefly, the argument makes use of the following lemma:</p>\n\n<blockquote>\n <p><strong>Lemma</strong> If <span class=\"math-container\">$q^k n^2$</span> is an odd perfect number with special/Euler prime <span class=\"math-container\">$q$</span>, then the biconditional\n <span class=\"math-container\">$$\\frac{q^2}{n}+\\frac{n}{q^2} \\text{ is bounded from above } \\iff \\frac{\\sigma(q^2)}{n}+\\frac{\\sigma(n)}{q^2} \\text{ is bounded from above}$$</span>\n holds.</p>\n</blockquote>\n\n<p>Details are in the following question:\n<a href=\"https://math.stackexchange.com/questions/2420351\">If <span class=\"math-container\">$q^k n^2$</span> is an odd perfect number with Euler prime <span class=\"math-container\">$q$</span>, which of the following relationships between <span class=\"math-container\">$q^2$</span> and <span class=\"math-container\">$n$</span> hold?</a></p>\n" } ]
Question: If we suppose, that a perfect odd number exists, can we determine whether infinite many exist? Answer 1: <p><em>This is not a direct answer to your question, but it is certainly related.</em></p> <p>We do know that <span class="math-container">$$I(q^k) &lt; \frac{5}{4} &lt; \frac{3}{2} \leq I(2^{p-1})$$</span> where <span class="math-container">$N=q^k n^2$</span> is an odd perfect number in Eulerian form and <span class="math-container">$M=(2^p - 1){2^{p-1}}$</span> is an even perfect number in Euclidean form.</p> <p>Because of a property satisfied by the abundancy index <span class="math-container">$I(x)=\sigma(x)/x$</span> at prime powers, it follows that <span class="math-container">$$2^{p-1} &lt; q^k.$$</span></p> <p>(<strong>NOTE THAT THE REASONING IS UNSOUND IN THE LAST PARAGRAPH, AS IT ONLY FOLLOWS FROM <span class="math-container">$I(q^k) &lt; I(2^{p-1})$</span> THAT <span class="math-container">$2 &lt; q$</span>. WHAT FOLLOWS IS THEREFORE INCONCLUSIVE, AT THIS POINT.</strong>)</p> <p>It follows that, since <span class="math-container">$q^k &lt; n^2$</span> <a href="https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html" rel="nofollow noreferrer">[Dris, 2012]</a>, if <span class="math-container">$N$</span> is bounded, then <span class="math-container">$M$</span> is also bounded.</p> <p>Thus, if there exists an odd perfect number, we would infer that there are only finitely many even perfect numbers.</p> Answer 2: <p>This <a href="https://math.stackexchange.com/a/2420379/28816">answer</a> attempts to prove your conjecture that if there is an odd perfect number, then there are infinitely many of them.</p> <p>Briefly, the argument makes use of the following lemma:</p> <blockquote> <p><strong>Lemma</strong> If <span class="math-container">$q^k n^2$</span> is an odd perfect number with special/Euler prime <span class="math-container">$q$</span>, then the biconditional <span class="math-container">$$\frac{q^2}{n}+\frac{n}{q^2} \text{ is bounded from 
above } \iff \frac{\sigma(q^2)}{n}+\frac{\sigma(n)}{q^2} \text{ is bounded from above}$$</span> holds.</p> </blockquote> <p>Details are in the following question: <a href="https://math.stackexchange.com/questions/2420351">If <span class="math-container">$q^k n^2$</span> is an odd perfect number with Euler prime <span class="math-container">$q$</span>, which of the following relationships between <span class="math-container">$q^2$</span> and <span class="math-container">$n$</span> hold?</a></p> Answer 3: <p><em>This is not a direct answer to your question, but it is certainly related.</em></p> <p>We do know that <span class="math-container">$$I(q^k) &lt; \frac{5}{4} &lt; \frac{3}{2} \leq I(2^{p-1})$$</span> where <span class="math-container">$N=q^k n^2$</span> is an odd perfect number in Eulerian form and <span class="math-container">$M=(2^p - 1){2^{p-1}}$</span> is an even perfect number in Euclidean form.</p> <p>Because of a property satisfied by the abundancy index <span class="math-container">$I(x)=\sigma(x)/x$</span> at prime powers, it follows that <span class="math-container">$$2^{p-1} &lt; q^k.$$</span></p> <p>(<strong>NOTE THAT THE REASONING IS UNSOUND IN THE LAST PARAGRAPH, AS IT ONLY FOLLOWS FROM <span class="math-container">$I(q^k) &lt; I(2^{p-1})$</span> THAT <span class="math-container">$2 &lt; q$</span>. 
WHAT FOLLOWS IS THEREFORE INCONCLUSIVE, AT THIS POINT.</strong>)</p> <p>It follows that, since <span class="math-container">$q^k &lt; n^2$</span> <a href="https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html" rel="nofollow noreferrer">[Dris, 2012]</a>, if <span class="math-container">$N$</span> is bounded, then <span class="math-container">$M$</span> is also bounded.</p> <p>Thus, if there exists an odd perfect number, we would infer that there are only finitely many even perfect numbers.</p> Answer 4: <p>This <a href="https://math.stackexchange.com/a/2420379/28816">answer</a> attempts to prove your conjecture that if there is an odd perfect number, then there are infinitely many of them.</p> <p>Briefly, the argument makes use of the following lemma:</p> <blockquote> <p><strong>Lemma</strong> If <span class="math-container">$q^k n^2$</span> is an odd perfect number with special/Euler prime <span class="math-container">$q$</span>, then the biconditional <span class="math-container">$$\frac{q^2}{n}+\frac{n}{q^2} \text{ is bounded from above } \iff \frac{\sigma(q^2)}{n}+\frac{\sigma(n)}{q^2} \text{ is bounded from above}$$</span> holds.</p> </blockquote> <p>Details are in the following question: <a href="https://math.stackexchange.com/questions/2420351">If <span class="math-container">$q^k n^2$</span> is an odd perfect number with Euler prime <span class="math-container">$q$</span>, which of the following relationships between <span class="math-container">$q^2$</span> and <span class="math-container">$n$</span> hold?</a></p> Answer 5: <p><em>This is not a direct answer to your question, but it is certainly related.</em></p> <p>We do know that <span class="math-container">$$I(q^k) &lt; \frac{5}{4} &lt; \frac{3}{2} \leq I(2^{p-1})$$</span> where <span class="math-container">$N=q^k n^2$</span> is an odd perfect number in Eulerian form and <span class="math-container">$M=(2^p - 1){2^{p-1}}$</span> is an even perfect number in Euclidean form.</p> <p>Because of a 
property satisfied by the abundancy index <span class="math-container">$I(x)=\sigma(x)/x$</span> at prime powers, it follows that <span class="math-container">$$2^{p-1} &lt; q^k.$$</span></p> <p>(<strong>NOTE THAT THE REASONING IS UNSOUND IN THE LAST PARAGRAPH, AS IT ONLY FOLLOWS FROM <span class="math-container">$I(q^k) &lt; I(2^{p-1})$</span> THAT <span class="math-container">$2 &lt; q$</span>. WHAT FOLLOWS IS THEREFORE INCONCLUSIVE, AT THIS POINT.</strong>)</p> <p>It follows that, since <span class="math-container">$q^k &lt; n^2$</span> <a href="https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html" rel="nofollow noreferrer">[Dris, 2012]</a>, if <span class="math-container">$N$</span> is bounded, then <span class="math-container">$M$</span> is also bounded.</p> <p>Thus, if there exists an odd perfect number, we would infer that there are only finitely many even perfect numbers.</p> Answer 6: <p>This <a href="https://math.stackexchange.com/a/2420379/28816">answer</a> attempts to prove your conjecture that if there is an odd perfect number, then there are infinitely many of them.</p> <p>Briefly, the argument makes use of the following lemma:</p> <blockquote> <p><strong>Lemma</strong> If <span class="math-container">$q^k n^2$</span> is an odd perfect number with special/Euler prime <span class="math-container">$q$</span>, then the biconditional <span class="math-container">$$\frac{q^2}{n}+\frac{n}{q^2} \text{ is bounded from above } \iff \frac{\sigma(q^2)}{n}+\frac{\sigma(n)}{q^2} \text{ is bounded from above}$$</span> holds.</p> </blockquote> <p>Details are in the following question: <a href="https://math.stackexchange.com/questions/2420351">If <span class="math-container">$q^k n^2$</span> is an odd perfect number with Euler prime <span class="math-container">$q$</span>, which of the following relationships between <span class="math-container">$q^2$</span> and <span class="math-container">$n$</span> hold?</a></p> Answer 7: <p><em>This is not a direct answer 
to your question, but it is certainly related.</em></p> <p>We do know that <span class="math-container">$$I(q^k) &lt; \frac{5}{4} &lt; \frac{3}{2} \leq I(2^{p-1})$$</span> where <span class="math-container">$N=q^k n^2$</span> is an odd perfect number in Eulerian form and <span class="math-container">$M=(2^p - 1){2^{p-1}}$</span> is an even perfect number in Euclidean form.</p> <p>Because of a property satisfied by the abundancy index <span class="math-container">$I(x)=\sigma(x)/x$</span> at prime powers, it follows that <span class="math-container">$$2^{p-1} &lt; q^k.$$</span></p> <p>(<strong>NOTE THAT THE REASONING IS UNSOUND IN THE LAST PARAGRAPH, AS IT ONLY FOLLOWS FROM <span class="math-container">$I(q^k) &lt; I(2^{p-1})$</span> THAT <span class="math-container">$2 &lt; q$</span>. WHAT FOLLOWS IS THEREFORE INCONCLUSIVE, AT THIS POINT.</strong>)</p> <p>It follows that, since <span class="math-container">$q^k &lt; n^2$</span> <a href="https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html" rel="nofollow noreferrer">[Dris, 2012]</a>, if <span class="math-container">$N$</span> is bounded, then <span class="math-container">$M$</span> is also bounded.</p> <p>Thus, if there exists an odd perfect number, we would infer that there are only finitely many even perfect numbers.</p> Answer 8: <p>This <a href="https://math.stackexchange.com/a/2420379/28816">answer</a> attempts to prove your conjecture that if there is an odd perfect number, then there are infinitely many of them.</p> <p>Briefly, the argument makes use of the following lemma:</p> <blockquote> <p><strong>Lemma</strong> If <span class="math-container">$q^k n^2$</span> is an odd perfect number with special/Euler prime <span class="math-container">$q$</span>, then the biconditional <span class="math-container">$$\frac{q^2}{n}+\frac{n}{q^2} \text{ is bounded from above } \iff \frac{\sigma(q^2)}{n}+\frac{\sigma(n)}{q^2} \text{ is bounded from above}$$</span> holds.</p> </blockquote> <p>Details are in the 
following question: <a href="https://math.stackexchange.com/questions/2420351">If <span class="math-container">$q^k n^2$</span> is an odd perfect number with Euler prime <span class="math-container">$q$</span>, which of the following relationships between <span class="math-container">$q^2$</span> and <span class="math-container">$n$</span> hold?</a></p> Answer 9: <p><em>This is not a direct answer to your question, but it is certainly related.</em></p> <p>We do know that <span class="math-container">$$I(q^k) &lt; \frac{5}{4} &lt; \frac{3}{2} \leq I(2^{p-1})$$</span> where <span class="math-container">$N=q^k n^2$</span> is an odd perfect number in Eulerian form and <span class="math-container">$M=(2^p - 1){2^{p-1}}$</span> is an even perfect number in Euclidean form.</p> <p>Because of a property satisfied by the abundancy index <span class="math-container">$I(x)=\sigma(x)/x$</span> at prime powers, it follows that <span class="math-container">$$2^{p-1} &lt; q^k.$$</span></p> <p>(<strong>NOTE THAT THE REASONING IS UNSOUND IN THE LAST PARAGRAPH, AS IT ONLY FOLLOWS FROM <span class="math-container">$I(q^k) &lt; I(2^{p-1})$</span> THAT <span class="math-container">$2 &lt; q$</span>. 
WHAT FOLLOWS IS THEREFORE INCONCLUSIVE, AT THIS POINT.</strong>)</p> <p>It follows that, since <span class="math-container">$q^k &lt; n^2$</span> <a href="https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html" rel="nofollow noreferrer">[Dris, 2012]</a>, if <span class="math-container">$N$</span> is bounded, then <span class="math-container">$M$</span> is also bounded.</p> <p>Thus, if there exists an odd perfect number, we would infer that there are only finitely many even perfect numbers.</p> Answer 10: <p>This <a href="https://math.stackexchange.com/a/2420379/28816">answer</a> attempts to prove your conjecture that if there is an odd perfect number, then there are infinitely many of them.</p> <p>Briefly, the argument makes use of the following lemma:</p> <blockquote> <p><strong>Lemma</strong> If <span class="math-container">$q^k n^2$</span> is an odd perfect number with special/Euler prime <span class="math-container">$q$</span>, then the biconditional <span class="math-container">$$\frac{q^2}{n}+\frac{n}{q^2} \text{ is bounded from above } \iff \frac{\sigma(q^2)}{n}+\frac{\sigma(n)}{q^2} \text{ is bounded from above}$$</span> holds.</p> </blockquote> <p>Details are in the following question: <a href="https://math.stackexchange.com/questions/2420351">If <span class="math-container">$q^k n^2$</span> is an odd perfect number with Euler prime <span class="math-container">$q$</span>, which of the following relationships between <span class="math-container">$q^2$</span> and <span class="math-container">$n$</span> hold?</a></p> Answer 11: <p><em>This is not a direct answer to your question, but it is certainly related.</em></p> <p>We do know that <span class="math-container">$$I(q^k) &lt; \frac{5}{4} &lt; \frac{3}{2} \leq I(2^{p-1})$$</span> where <span class="math-container">$N=q^k n^2$</span> is an odd perfect number in Eulerian form and <span class="math-container">$M=(2^p - 1){2^{p-1}}$</span> is an even perfect number in Euclidean form.</p> <p>Because of a 
property satisfied by the abundancy index <span class="math-container">$I(x)=\sigma(x)/x$</span> at prime powers, it follows that <span class="math-container">$$2^{p-1} &lt; q^k.$$</span></p> <p>(<strong>NOTE THAT THE REASONING IS UNSOUND IN THE LAST PARAGRAPH, AS IT ONLY FOLLOWS FROM <span class="math-container">$I(q^k) &lt; I(2^{p-1})$</span> THAT <span class="math-container">$2 &lt; q$</span>. WHAT FOLLOWS IS THEREFORE INCONCLUSIVE, AT THIS POINT.</strong>)</p> <p>It follows that, since <span class="math-container">$q^k &lt; n^2$</span> <a href="https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html" rel="nofollow noreferrer">[Dris, 2012]</a>, if <span class="math-container">$N$</span> is bounded, then <span class="math-container">$M$</span> is also bounded.</p> <p>Thus, if there exists an odd perfect number, we would infer that there are only finitely many even perfect numbers.</p> Answer 12: <p>This <a href="https://math.stackexchange.com/a/2420379/28816">answer</a> attempts to prove your conjecture that if there is an odd perfect number, then there are infinitely many of them.</p> <p>Briefly, the argument makes use of the following lemma:</p> <blockquote> <p><strong>Lemma</strong> If <span class="math-container">$q^k n^2$</span> is an odd perfect number with special/Euler prime <span class="math-container">$q$</span>, then the biconditional <span class="math-container">$$\frac{q^2}{n}+\frac{n}{q^2} \text{ is bounded from above } \iff \frac{\sigma(q^2)}{n}+\frac{\sigma(n)}{q^2} \text{ is bounded from above}$$</span> holds.</p> </blockquote> <p>Details are in the following question: <a href="https://math.stackexchange.com/questions/2420351">If <span class="math-container">$q^k n^2$</span> is an odd perfect number with Euler prime <span class="math-container">$q$</span>, which of the following relationships between <span class="math-container">$q^2$</span> and <span class="math-container">$n$</span> hold?</a></p> Answer 13: <p><em>This is not a direct answer 
to your question, but it is certainly related.</em></p> <p>We do know that <span class="math-container">$$I(q^k) &lt; \frac{5}{4} &lt; \frac{3}{2} \leq I(2^{p-1})$$</span> where <span class="math-container">$N=q^k n^2$</span> is an odd perfect number in Eulerian form and <span class="math-container">$M=(2^p - 1){2^{p-1}}$</span> is an even perfect number in Euclidean form.</p> <p>Because of a property satisfied by the abundancy index <span class="math-container">$I(x)=\sigma(x)/x$</span> at prime powers, it follows that <span class="math-container">$$2^{p-1} &lt; q^k.$$</span></p> <p>(<strong>NOTE THAT THE REASONING IS UNSOUND IN THE LAST PARAGRAPH, AS IT ONLY FOLLOWS FROM <span class="math-container">$I(q^k) &lt; I(2^{p-1})$</span> THAT <span class="math-container">$2 &lt; q$</span>. WHAT FOLLOWS IS THEREFORE INCONCLUSIVE, AT THIS POINT.</strong>)</p> <p>It follows that, since <span class="math-container">$q^k &lt; n^2$</span> <a href="https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html" rel="nofollow noreferrer">[Dris, 2012]</a>, if <span class="math-container">$N$</span> is bounded, then <span class="math-container">$M$</span> is also bounded.</p> <p>Thus, if there exists an odd perfect number, we would infer that there are only finitely many even perfect numbers.</p> Answer 14: <p>This <a href="https://math.stackexchange.com/a/2420379/28816">answer</a> attempts to prove your conjecture that if there is an odd perfect number, then there are infinitely many of them.</p> <p>Briefly, the argument makes use of the following lemma:</p> <blockquote> <p><strong>Lemma</strong> If <span class="math-container">$q^k n^2$</span> is an odd perfect number with special/Euler prime <span class="math-container">$q$</span>, then the biconditional <span class="math-container">$$\frac{q^2}{n}+\frac{n}{q^2} \text{ is bounded from above } \iff \frac{\sigma(q^2)}{n}+\frac{\sigma(n)}{q^2} \text{ is bounded from above}$$</span> holds.</p> </blockquote> <p>Details are in the 
following question: <a href="https://math.stackexchange.com/questions/2420351">If <span class="math-container">$q^k n^2$</span> is an odd perfect number with Euler prime <span class="math-container">$q$</span>, which of the following relationships between <span class="math-container">$q^2$</span> and <span class="math-container">$n$</span> hold?</a></p> Answer 15: <p><em>This is not a direct answer to your question, but it is certainly related.</em></p> <p>We do know that <span class="math-container">$$I(q^k) &lt; \frac{5}{4} &lt; \frac{3}{2} \leq I(2^{p-1})$$</span> where <span class="math-container">$N=q^k n^2$</span> is an odd perfect number in Eulerian form and <span class="math-container">$M=(2^p - 1){2^{p-1}}$</span> is an even perfect number in Euclidean form.</p> <p>Because of a property satisfied by the abundancy index <span class="math-container">$I(x)=\sigma(x)/x$</span> at prime powers, it follows that <span class="math-container">$$2^{p-1} &lt; q^k.$$</span></p> <p>(<strong>NOTE THAT THE REASONING IS UNSOUND IN THE LAST PARAGRAPH, AS IT ONLY FOLLOWS FROM <span class="math-container">$I(q^k) &lt; I(2^{p-1})$</span> THAT <span class="math-container">$2 &lt; q$</span>. 
WHAT FOLLOWS IS THEREFORE INCONCLUSIVE, AT THIS POINT.</strong>)</p> <p>It follows that, since <span class="math-container">$q^k &lt; n^2$</span> <a href="https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.html" rel="nofollow noreferrer">[Dris, 2012]</a>, if <span class="math-container">$N$</span> is bounded, then <span class="math-container">$M$</span> is also bounded.</p> <p>Thus, if there exists an odd perfect number, we would infer that there are only finitely many even perfect numbers.</p> Answer 16: <p>This <a href="https://math.stackexchange.com/a/2420379/28816">answer</a> attempts to prove your conjecture that if there is an odd perfect number, then there are infinitely many of them.</p> <p>Briefly, the argument makes use of the following lemma:</p> <blockquote> <p><strong>Lemma</strong> If <span class="math-container">$q^k n^2$</span> is an odd perfect number with special/Euler prime <span class="math-container">$q$</span>, then the biconditional <span class="math-container">$$\frac{q^2}{n}+\frac{n}{q^2} \text{ is bounded from above } \iff \frac{\sigma(q^2)}{n}+\frac{\sigma(n)}{q^2} \text{ is bounded from above}$$</span> holds.</p> </blockquote> <p>Details are in the following question: <a href="https://math.stackexchange.com/questions/2420351">If <span class="math-container">$q^k n^2$</span> is an odd perfect number with Euler prime <span class="math-container">$q$</span>, which of the following relationships between <span class="math-container">$q^2$</span> and <span class="math-container">$n$</span> hold?</a></p>
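The answers above lean on numeric facts about the abundancy index $I(x)=\sigma(x)/x$: that $I$ equals $2$ exactly at perfect numbers, that $I(2^{p-1}) = (2^p-1)/2^{p-1} \geq 3/2$, and that $I(q^k) < 5/4$ when $q \geq 5$. These bounds are easy to spot-check computationally; the sketch below is illustrative only (the helper name `abundancy` is not from the answers) and uses a naive divisor sum.

```python
from fractions import Fraction as F

def abundancy(n):
    """Abundancy index I(n) = sigma(n)/n, with sigma(n) the sum of divisors of n."""
    sigma = sum(d for d in range(1, n + 1) if n % d == 0)
    return F(sigma, n)

# n is perfect exactly when I(n) == 2
assert abundancy(6) == 2 and abundancy(28) == 2

# I(2^(p-1)) = (2^p - 1)/2^(p-1) lies in [3/2, 2) for p >= 2, approaching 2
for p in [2, 3, 5, 7, 13]:
    assert F(3, 2) <= abundancy(2 ** (p - 1)) < 2

# For a prime q congruent to 1 mod 4 (so q >= 5), I(q^k) < q/(q-1) <= 5/4
for q in [5, 13, 17]:
    for k in [1, 2, 3]:
        assert abundancy(q ** k) < F(5, 4)
```

Exact `Fraction` arithmetic avoids any floating-point doubt about strict inequalities such as $I(q^k) < 5/4$.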
Find orthocentre of a triangle given equations of its three sides, without finding triangle's vertices
2,116,467
6
https://math.stackexchange.com/questions/2116467/find-orthocentre-of-a-triangle-given-equations-of-its-three-sides-without-findi
<blockquote> <p>Find the coordinates of the orthocenter of the triangle, the equations of whose sides are $x + y = 1$, $2x + 3y = 6$, $4x − y + 4 = 0$.</p> </blockquote> <p>I know how to find it by first finding all the vertices (the orthocenter is the point of intersection of all altitudes). How can we find it without finding the coordinates of its vertices?</p>
[ { "answer_id": 2117080, "score": 6, "text": "<p>(See figure below)</p>\n\n<p>Here is a solution bypassing the obtention of vertices' coordinates, by using \"<strong>pencils of lines</strong>\".</p>\n\n<p>Being given two lines $L_1$ and $L_2$ with resp. equations $$\\begin{cases}u_1x+v_1y+w_1=0\\\\u_2x+v_2y+w_2=0\\end{cases}$$</p>\n\n<p>Any line passing through point $L_1 \\cap L_2$ has general equation</p>\n\n<p>$$m(u_1x+v_1y+w_1=0)+(1-m)(u_2x+v_2y+w_2=0)=0, \\ \\ m \\in \\mathbb{R}.$$ </p>\n\n<p>The set of all these lines is called the \"pencil of lines\" defined by $L_1$ and $L_2$. </p>\n\n<p>Thus, the pencil of lines defined by $x+y-1=0$ and $2x+3y-6=0$ is:</p>\n\n<p>$$a(x+y-1)+(1-a)(2x+3y-6)=0$$</p>\n\n<p>$$\\tag{1} \\iff \\ (2-a)x+(3-2a)y+(5a-6)=0$$</p>\n\n<p>Among these lines, one is the height. This height is characterized by the fact that its normal vector is orthogonal to the normal vector of the third line:</p>\n\n<p>$$\\binom{2-a}{3-2a} \\perp \\binom{4}{-1} \\ \\ \\iff \\ \\ (2-a)4+(3-2a)(-1)=0 \\ \\ \\iff \\ \\ a=\\frac52$$</p>\n\n<p>(recall: a line with equation $ux+vy+w=0$ has normal vector $\\binom{u}{v}.$)</p>\n\n<p>Plugging this value of $a$ in (1) gives the equation of the height:</p>\n\n<p>$$\\tag{2}x+4y-13=0.$$</p>\n\n<p>Working in the same way with a group of 2 other sides: </p>\n\n<p>$$b(2x+3y-6)+(1-b)(4x-y+4)=0$$</p>\n\n<p>$$\\tag{3} \\iff \\ \\ (4-2b)x+(4b-1)y+(4-10b)=0$$</p>\n\n<p>$$\\binom{4-2b}{4b-1} \\perp \\binom{1}{1} \\ \\ \\iff \\ \\ (4-2b)1+(4b-1)1=0 \\ \\ \\iff \\ \\ b=-\\frac32$$</p>\n\n<p>Plugging this value of $b$ in (3) gives the equation of the second height:</p>\n\n<p>$$\\tag{4}7x-7y+19=0.$$</p>\n\n<p>The solution of the system formed by (2) and (4) are the coordinates of the orthocentre </p>\n\n<blockquote>\n <p>$$H\\left(\\frac{3}{7},\\frac{22}{7}\\right).$$</p>\n</blockquote>\n\n<p><a href=\"https://i.stack.imgur.com/z0NRc.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/z0NRc.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2116518, "score": 1, "text": "<p>Call $O=(x_0,y_0)$ the orthocenter. A general line that goes through $O$ is like:</p>\n\n<p>$$y=ax+(y_0-ax_0)$$</p>\n\n<p>Once it is perpendicular to all three lines then all heights must have the form:</p>\n\n<p>$$y=x+y_0-x_0\\\\\ny=\\frac{3}{2}x+y_0-\\frac{3}{2}x_0\\\\\ny=-\\frac{1}{4}x+y_0-\\frac{1}{4}x_0$$</p>\n\n<p>See that all three lines pass through $O$ but in order to be a height all of them must pass through some vertex. It means that there is not a way to find $O$ without at least two vertices. </p>\n" } ]
Question: Find orthocentre of a triangle given equations of its three sides, without finding triangle&#39;s vertices Answer 1: <p>(See figure below)</p> <p>Here is a solution bypassing the obtention of vertices' coordinates, by using "<strong>pencils of lines</strong>".</p> <p>Being given two lines $L_1$ and $L_2$ with resp. equations $$\begin{cases}u_1x+v_1y+w_1=0\\u_2x+v_2y+w_2=0\end{cases}$$</p> <p>Any line passing through point $L_1 \cap L_2$ has general equation</p> <p>$$m(u_1x+v_1y+w_1=0)+(1-m)(u_2x+v_2y+w_2=0)=0, \ \ m \in \mathbb{R}.$$ </p> <p>The set of all these lines is called the "pencil of lines" defined by $L_1$ and $L_2$. </p> <p>Thus, the pencil of lines defined by $x+y-1=0$ and $2x+3y-6=0$ is:</p> <p>$$a(x+y-1)+(1-a)(2x+3y-6)=0$$</p> <p>$$\tag{1} \iff \ (2-a)x+(3-2a)y+(5a-6)=0$$</p> <p>Among these lines, one is the height. This height is characterized by the fact that its normal vector is orthogonal to the normal vector of the third line:</p> <p>$$\binom{2-a}{3-2a} \perp \binom{4}{-1} \ \ \iff \ \ (2-a)4+(3-2a)(-1)=0 \ \ \iff \ \ a=\frac52$$</p> <p>(recall: a line with equation $ux+vy+w=0$ has normal vector $\binom{u}{v}.$)</p> <p>Plugging this value of $a$ in (1) gives the equation of the height:</p> <p>$$\tag{2}x+4y-13=0.$$</p> <p>Working in the same way with a group of 2 other sides: </p> <p>$$b(2x+3y-6)+(1-b)(4x-y+4)=0$$</p> <p>$$\tag{3} \iff \ \ (4-2b)x+(4b-1)y+(4-10b)=0$$</p> <p>$$\binom{4-2b}{4b-1} \perp \binom{1}{1} \ \ \iff \ \ (4-2b)1+(4b-1)1=0 \ \ \iff \ \ b=-\frac32$$</p> <p>Plugging this value of $b$ in (3) gives the equation of the second height:</p> <p>$$\tag{4}7x-7y+19=0.$$</p> <p>The solution of the system formed by (2) and (4) are the coordinates of the orthocentre </p> <blockquote> <p>$$H\left(\frac{3}{7},\frac{22}{7}\right).$$</p> </blockquote> <p><a href="https://i.stack.imgur.com/z0NRc.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z0NRc.jpg" alt="enter image description here"></a></p> Answer 2: <p>Call 
$O=(x_0,y_0)$ the orthocenter. A general line that goes through $O$ is like:</p> <p>$$y=ax+(y_0-ax_0)$$</p> <p>Once it is perpendicular to all three lines then all heights must have the form:</p> <p>$$y=x+y_0-x_0\\
y=\frac{3}{2}x+y_0-\frac{3}{2}x_0\\
y=-\frac{1}{4}x+y_0-\frac{1}{4}x_0$$</p> <p>See that all three lines pass through $O$ but in order to be a height all of them must pass through some vertex. It means that there is not a way to find $O$ without at least two vertices. </p>
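The pencil-of-lines computation in the answer above can be checked mechanically. Below is a small Python sketch (the function names and structure are my own, not from the answer): it picks the pencil member whose normal is orthogonal to the third side's normal, then intersects two heights, using exact rational arithmetic so the result comes out as $(3/7,\,22/7)$ rather than a float approximation.

```python
from fractions import Fraction

def height(l1, l2, l3):
    """Member a*l1 + (1-a)*l2 of the pencil through the intersection of
    l1 and l2 whose normal is orthogonal to the normal (u3, v3) of l3.
    Lines are coefficient triples (u, v, w) for u*x + v*y + w = 0."""
    (u1, v1, w1), (u2, v2, w2), (u3, v3, _) = l1, l2, l3
    # Orthogonality condition, linear in a:
    # (a*(u1-u2) + u2)*u3 + (a*(v1-v2) + v2)*v3 = 0
    a = Fraction(-(u2*u3 + v2*v3), (u1 - u2)*u3 + (v1 - v2)*v3)
    return (a*u1 + (1 - a)*u2, a*v1 + (1 - a)*v2, a*w1 + (1 - a)*w2)

def intersect(h1, h2):
    """Common point of two lines via Cramer's rule."""
    (u1, v1, w1), (u2, v2, w2) = h1, h2
    d = u1*v2 - u2*v1
    return ((v1*w2 - v2*w1) / d, (u2*w1 - u1*w2) / d)

# The three sides from the answer: x+y-1=0, 2x+3y-6=0, 4x-y+4=0
s1, s2, s3 = (1, 1, -1), (2, 3, -6), (4, -1, 4)
h_a = height(s1, s2, s3)   # proportional to x+4y-13=0 (eq. (2))
h_b = height(s2, s3, s1)   # proportional to 7x-7y+19=0 (eq. (4))
H = intersect(h_a, h_b)
print(H)                   # (Fraction(3, 7), Fraction(22, 7))
```

Using `Fraction` keeps every intermediate value (such as $a=\frac52$ and $b=-\frac32$) exact, so the orthocentre coordinates match the answer's $H\left(\frac37,\frac{22}{7}\right)$ with no rounding.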
How can a Fields Medallist be &#39;not very good at logic&#39;?
2,116,634
6
https://math.stackexchange.com/questions/2116634/how-can-a-fields-medallist-be-not-very-good-at-logic
<p><sub><a href="http://superstringtheory.com/people/atiyah.html" rel="nofollow noreferrer">Source 2:</a> <em>Sir Michael Atiyah on math, physics and fun</em>.</sub></p> <blockquote> <h3>I think the way a lot of people think about mathematics, since it's all based on logic, it can't be unpredictable.</h3> <p><strong><em>Well, the idea that mathematics is synonymous with logic is a great ridiculous statement that some people make. Mathematics is very difficult to define, actually, what constitutes mathematics. Logical thinking is a key part of mathematics, but it's by no means the only part.</em></strong> You've got to have a lot of input and material from somewhere, you've got to have ideas coming from physics, concepts from geometry. You've got to have imagination, you're going to use intuition, guesswork, vision, like a creative artist has. In fact, proofs are usually only the last bit of the story, when you come to tie up the... dot the i's and cross the T's. Sometimes the proof is needed to hold the whole thing together like the steel structure of a building, but sometimes you've stopped putting it together, and the proof is just the last little bit of polish on the surface.<br>   <strong><em>So the most time mathematicians are working, they're concerned with much more than proofs, they're concerned with ideas, understanding why this is true, what leads where, possible links. You play around in your mind with a whole host of ill-defined things.</em></strong><br>   And I think that's one thing the field can get wrong when they're being taught to students. They can see a very formal proof, and they can see, this is what mathematics is. My story I can tell. When I was a student I went to some lectures on analysis where people gave some very formal proofs about this being less than epsilon and this is bigger than that.
Then I had private supervision from a Russian mathematician called Bessikovich, a good analyst, and he'd draw a little picture and say, this -- this is small, this -- this is very small. Now that's the way an analyst thinks. None of this nonsense about precision. Small, very small. You get an idea what is going on. And then you can work it out afterwards. And people can be misled, if you read books, textbooks or go to lectures, and you see this very formal approach and you think, gosh that's the way I gotta think, and they can be turned off by that because that's not an interesting thing, mathematics, you see. You aren't thinking at that point imaginatively.<br>   But you mustn't get carried away by the other extreme. You mustn't go all the time with airy-faery ideas that you can't actually write and solve a problem. That's a danger. But you've got to have a balance between being able to be disciplined and solve problems and apply logical thinking when necessary. And at other times you've got to be able to freely float in the atmosphere like a poet and imagine the whole universe of possibilities, and hope that eventually you come down to Earth somewhere else. So it's very exciting to be a practicing mathematician or a physicist. Physicists in principle have to tie themselves down to Earth more than mathematicians, and one day look at experimental data. Well mathematicians have to tie themselves down in other ways too. A proof is one of the things that instantly ties them down. But it's a mistake to think that mathematics and logic are the same. They overlap in important ways, but it's a big mistake. $\color{red}{\text{I'm not very good at logic.}}$</p> </blockquote> <ol> <li><p>Per the italicised sentences above, I understand that math needs some logic, more than logic; but not all mathematicians need specialise in Mathematical Logic.</p></li> <li><p>Assumption: Sir Michael means the red (last sentence) literally, and not humbly or self-effacingly. 
</p></li> <li><p>Then how is the red true? An excellent mathematician must have mastered the fundamentals of logic, e.g. undergraduate Mathematical Logic. So does the red instead mean research-level Mathematical Logic? </p></li> </ol>
[ { "answer_id": 2116809, "score": 3, "text": "<p>I think the sentiment expressed by the author is pretty clear: being a good mathematician is being able to go back and forth between being able to be really precise (logic), <em>and</em> being able to explore (and maybe even 'intuit') interesting relationships and connections, play with new ideas, speculate, hypothesize, etc.</p>\n\n<p>But, as such, I do believe Sir Michael did (probably intentionally) overstate his case a bit: as he says himself, <em>at some point</em> you need to take these new ideas and suspected theorems and make them hard, and that requires being good at logic. So I agree with you there that this seems like a strange statement of his. Indeed, Sir Michael is of course perfectly capable in logical thinking! </p>\n\n<p>Then again, he does refer to some of the high technical formalizations of logic, and how that seems almost too 'extreme', too 'restricted' ... how indeed it is hard to see how, say, a deeply formal logical axiomatization of, say, arithmetic, gives us any novel ideas about arithmetic (if you work with the Peano Axioms, I think you'll get the idea). So <em>that</em> kind of logic is something he says that he resists doing. </p>\n\n<p>But in the end you're right: to say that he is not good at logic at all, seems overstated, though most likely intentionally so and for dramatic effect: I think it's his way of trying to get those of us who do tend to stay in the purely 'precise' domain to 'venture out' a little more!</p>\n" }, { "answer_id": 2116674, "score": 2, "text": "<p>One can be an excellent mathematician and be helpless in face of even the 1st order logic (aka predicate theory). Just take a look at <a href=\"https://en.wikipedia.org/wiki/Principia_Mathematica\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Principia_Mathematica</a> (admittedly, this is a bit extreme). 
I remember taking logic classes and having to face this staff: it felt completely foreign to me, even though it describes (to some extent) how mathematical proofs are structured. </p>\n" }, { "answer_id": 2116792, "score": 2, "text": "<p>If he's a mathematician then he's very good at the kind of logic that's taught in secondary school, which is the only part of logic that's used in actual mathematical proofs. The purpose of that kind of logic is to assure correctness of proofs.</p>\n\n<p>Now suppose someone asks whether the ratio of lengths of proofs to lengths of theorems is bounded. That is also a question of <b>logic.</b> There are many questions of that kind, and that's probably what he was talking about.</p>\n\n<p>PS: It's not bounded.</p>\n" } ]
That is also a question of <b>logic.</b> There are many questions of that kind, and that's probably what he was talking about.</p>\n\n<p>PS: It's not bounded.</p>\n" }, { "answer_id": 2116809, "score": 3, "text": "<p>I think the sentiment expressed by the author is pretty clear: being a good mathematician is being able to go back and forth between being able to be really precise (logic), <em>and</em> being able to explore (and maybe even 'intuit') interesting relationships and connections, play with new ideas, speculate, hypothesize, etc.</p>\n\n<p>But, as such, I do believe Sir Michael did (probably intentionally) overstate his case a bit: as he says himself, <em>at some point</em> you need to take these new ideas and suspected theorems and make them hard, and that requires being good at logic. So I agree with you there that this seems like a strange statement of his. Indeed, Sir Michael is of course perfectly capable in logical thinking! </p>\n\n<p>Then again, he does refer to some of the high technical formalizations of logic, and how that seems almost too 'extreme', too 'restricted' ... how indeed it is hard to see how, say, a deeply formal logical axiomatization of, say, arithmetic, gives us any novel ideas about arithmetic (if you work with the Peano Axioms, I think you'll get the idea). So <em>that</em> kind of logic is something he says that he resists doing. </p>\n\n<p>But in the end you're right: to say that he is not good at logic at all, seems overstated, though most likely intentionally so and for dramatic effect: I think it's his way of trying to get those of us who do tend to stay in the purely 'precise' domain to 'venture out' a little more!</p>\n" }, { "answer_id": 2116674, "score": 2, "text": "<p>One can be an excellent mathematician and be helpless in face of even the 1st order logic (aka predicate theory). 
Just take a look at <a href=\"https://en.wikipedia.org/wiki/Principia_Mathematica\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Principia_Mathematica</a> (admittedly, this is a bit extreme). I remember taking logic classes and having to face this staff: it felt completely foreign to me, even though it describes (to some extent) how mathematical proofs are structured. </p>\n" }, { "answer_id": 2116792, "score": 2, "text": "<p>If he's a mathematician then he's very good at the kind of logic that's taught in secondary school, which is the only part of logic that's used in actual mathematical proofs. The purpose of that kind of logic is to assure correctness of proofs.</p>\n\n<p>Now suppose someone asks whether the ratio of lengths of proofs to lengths of theorems is bounded. That is also a question of <b>logic.</b> There are many questions of that kind, and that's probably what he was talking about.</p>\n\n<p>PS: It's not bounded.</p>\n" }, { "answer_id": 2116809, "score": 3, "text": "<p>I think the sentiment expressed by the author is pretty clear: being a good mathematician is being able to go back and forth between being able to be really precise (logic), <em>and</em> being able to explore (and maybe even 'intuit') interesting relationships and connections, play with new ideas, speculate, hypothesize, etc.</p>\n\n<p>But, as such, I do believe Sir Michael did (probably intentionally) overstate his case a bit: as he says himself, <em>at some point</em> you need to take these new ideas and suspected theorems and make them hard, and that requires being good at logic. So I agree with you there that this seems like a strange statement of his. Indeed, Sir Michael is of course perfectly capable in logical thinking! </p>\n\n<p>Then again, he does refer to some of the high technical formalizations of logic, and how that seems almost too 'extreme', too 'restricted' ... 
how indeed it is hard to see how, say, a deeply formal logical axiomatization of, say, arithmetic, gives us any novel ideas about arithmetic (if you work with the Peano Axioms, I think you'll get the idea). So <em>that</em> kind of logic is something he says that he resists doing. </p>\n\n<p>But in the end you're right: to say that he is not good at logic at all, seems overstated, though most likely intentionally so and for dramatic effect: I think it's his way of trying to get those of us who do tend to stay in the purely 'precise' domain to 'venture out' a little more!</p>\n" }, { "answer_id": 2116674, "score": 2, "text": "<p>One can be an excellent mathematician and be helpless in face of even the 1st order logic (aka predicate theory). Just take a look at <a href=\"https://en.wikipedia.org/wiki/Principia_Mathematica\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Principia_Mathematica</a> (admittedly, this is a bit extreme). I remember taking logic classes and having to face this staff: it felt completely foreign to me, even though it describes (to some extent) how mathematical proofs are structured. </p>\n" }, { "answer_id": 2116792, "score": 2, "text": "<p>If he's a mathematician then he's very good at the kind of logic that's taught in secondary school, which is the only part of logic that's used in actual mathematical proofs. The purpose of that kind of logic is to assure correctness of proofs.</p>\n\n<p>Now suppose someone asks whether the ratio of lengths of proofs to lengths of theorems is bounded. 
That is also a question of <b>logic.</b> There are many questions of that kind, and that's probably what he was talking about.</p>\n\n<p>PS: It's not bounded.</p>\n" }, { "answer_id": 2116809, "score": 3, "text": "<p>I think the sentiment expressed by the author is pretty clear: being a good mathematician is being able to go back and forth between being able to be really precise (logic), <em>and</em> being able to explore (and maybe even 'intuit') interesting relationships and connections, play with new ideas, speculate, hypothesize, etc.</p>\n\n<p>But, as such, I do believe Sir Michael did (probably intentionally) overstate his case a bit: as he says himself, <em>at some point</em> you need to take these new ideas and suspected theorems and make them hard, and that requires being good at logic. So I agree with you there that this seems like a strange statement of his. Indeed, Sir Michael is of course perfectly capable in logical thinking! </p>\n\n<p>Then again, he does refer to some of the high technical formalizations of logic, and how that seems almost too 'extreme', too 'restricted' ... how indeed it is hard to see how, say, a deeply formal logical axiomatization of, say, arithmetic, gives us any novel ideas about arithmetic (if you work with the Peano Axioms, I think you'll get the idea). So <em>that</em> kind of logic is something he says that he resists doing. </p>\n\n<p>But in the end you're right: to say that he is not good at logic at all, seems overstated, though most likely intentionally so and for dramatic effect: I think it's his way of trying to get those of us who do tend to stay in the purely 'precise' domain to 'venture out' a little more!</p>\n" }, { "answer_id": 2116674, "score": 2, "text": "<p>One can be an excellent mathematician and be helpless in face of even the 1st order logic (aka predicate theory). 
Just take a look at <a href=\"https://en.wikipedia.org/wiki/Principia_Mathematica\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Principia_Mathematica</a> (admittedly, this is a bit extreme). I remember taking logic classes and having to face this staff: it felt completely foreign to me, even though it describes (to some extent) how mathematical proofs are structured. </p>\n" }, { "answer_id": 2116792, "score": 2, "text": "<p>If he's a mathematician then he's very good at the kind of logic that's taught in secondary school, which is the only part of logic that's used in actual mathematical proofs. The purpose of that kind of logic is to assure correctness of proofs.</p>\n\n<p>Now suppose someone asks whether the ratio of lengths of proofs to lengths of theorems is bounded. That is also a question of <b>logic.</b> There are many questions of that kind, and that's probably what he was talking about.</p>\n\n<p>PS: It's not bounded.</p>\n" }, { "answer_id": 2116809, "score": 3, "text": "<p>I think the sentiment expressed by the author is pretty clear: being a good mathematician is being able to go back and forth between being able to be really precise (logic), <em>and</em> being able to explore (and maybe even 'intuit') interesting relationships and connections, play with new ideas, speculate, hypothesize, etc.</p>\n\n<p>But, as such, I do believe Sir Michael did (probably intentionally) overstate his case a bit: as he says himself, <em>at some point</em> you need to take these new ideas and suspected theorems and make them hard, and that requires being good at logic. So I agree with you there that this seems like a strange statement of his. Indeed, Sir Michael is of course perfectly capable in logical thinking! </p>\n\n<p>Then again, he does refer to some of the high technical formalizations of logic, and how that seems almost too 'extreme', too 'restricted' ... 
how indeed it is hard to see how, say, a deeply formal logical axiomatization of, say, arithmetic, gives us any novel ideas about arithmetic (if you work with the Peano Axioms, I think you'll get the idea). So <em>that</em> kind of logic is something he says that he resists doing. </p>\n\n<p>But in the end you're right: to say that he is not good at logic at all, seems overstated, though most likely intentionally so and for dramatic effect: I think it's his way of trying to get those of us who do tend to stay in the purely 'precise' domain to 'venture out' a little more!</p>\n" }, { "answer_id": 2116674, "score": 2, "text": "<p>One can be an excellent mathematician and be helpless in face of even the 1st order logic (aka predicate theory). Just take a look at <a href=\"https://en.wikipedia.org/wiki/Principia_Mathematica\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Principia_Mathematica</a> (admittedly, this is a bit extreme). I remember taking logic classes and having to face this staff: it felt completely foreign to me, even though it describes (to some extent) how mathematical proofs are structured. </p>\n" }, { "answer_id": 2116792, "score": 2, "text": "<p>If he's a mathematician then he's very good at the kind of logic that's taught in secondary school, which is the only part of logic that's used in actual mathematical proofs. The purpose of that kind of logic is to assure correctness of proofs.</p>\n\n<p>Now suppose someone asks whether the ratio of lengths of proofs to lengths of theorems is bounded. 
That is also a question of <b>logic.</b> There are many questions of that kind, and that's probably what he was talking about.</p>\n\n<p>PS: It's not bounded.</p>\n" }, { "answer_id": 2116809, "score": 3, "text": "<p>I think the sentiment expressed by the author is pretty clear: being a good mathematician is being able to go back and forth between being able to be really precise (logic), <em>and</em> being able to explore (and maybe even 'intuit') interesting relationships and connections, play with new ideas, speculate, hypothesize, etc.</p>\n\n<p>But, as such, I do believe Sir Michael did (probably intentionally) overstate his case a bit: as he says himself, <em>at some point</em> you need to take these new ideas and suspected theorems and make them hard, and that requires being good at logic. So I agree with you there that this seems like a strange statement of his. Indeed, Sir Michael is of course perfectly capable in logical thinking! </p>\n\n<p>Then again, he does refer to some of the high technical formalizations of logic, and how that seems almost too 'extreme', too 'restricted' ... how indeed it is hard to see how, say, a deeply formal logical axiomatization of, say, arithmetic, gives us any novel ideas about arithmetic (if you work with the Peano Axioms, I think you'll get the idea). So <em>that</em> kind of logic is something he says that he resists doing. </p>\n\n<p>But in the end you're right: to say that he is not good at logic at all, seems overstated, though most likely intentionally so and for dramatic effect: I think it's his way of trying to get those of us who do tend to stay in the purely 'precise' domain to 'venture out' a little more!</p>\n" }, { "answer_id": 2116674, "score": 2, "text": "<p>One can be an excellent mathematician and be helpless in face of even the 1st order logic (aka predicate theory). 
Just take a look at <a href=\"https://en.wikipedia.org/wiki/Principia_Mathematica\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Principia_Mathematica</a> (admittedly, this is a bit extreme). I remember taking logic classes and having to face this staff: it felt completely foreign to me, even though it describes (to some extent) how mathematical proofs are structured. </p>\n" }, { "answer_id": 2116792, "score": 2, "text": "<p>If he's a mathematician then he's very good at the kind of logic that's taught in secondary school, which is the only part of logic that's used in actual mathematical proofs. The purpose of that kind of logic is to assure correctness of proofs.</p>\n\n<p>Now suppose someone asks whether the ratio of lengths of proofs to lengths of theorems is bounded. That is also a question of <b>logic.</b> There are many questions of that kind, and that's probably what he was talking about.</p>\n\n<p>PS: It's not bounded.</p>\n" }, { "answer_id": 2116809, "score": 3, "text": "<p>I think the sentiment expressed by the author is pretty clear: being a good mathematician is being able to go back and forth between being able to be really precise (logic), <em>and</em> being able to explore (and maybe even 'intuit') interesting relationships and connections, play with new ideas, speculate, hypothesize, etc.</p>\n\n<p>But, as such, I do believe Sir Michael did (probably intentionally) overstate his case a bit: as he says himself, <em>at some point</em> you need to take these new ideas and suspected theorems and make them hard, and that requires being good at logic. So I agree with you there that this seems like a strange statement of his. Indeed, Sir Michael is of course perfectly capable in logical thinking! </p>\n\n<p>Then again, he does refer to some of the high technical formalizations of logic, and how that seems almost too 'extreme', too 'restricted' ... 
how indeed it is hard to see how, say, a deeply formal logical axiomatization of, say, arithmetic, gives us any novel ideas about arithmetic (if you work with the Peano Axioms, I think you'll get the idea). So <em>that</em> kind of logic is something he says that he resists doing. </p>\n\n<p>But in the end you're right: to say that he is not good at logic at all, seems overstated, though most likely intentionally so and for dramatic effect: I think it's his way of trying to get those of us who do tend to stay in the purely 'precise' domain to 'venture out' a little more!</p>\n" }, { "answer_id": 2116674, "score": 2, "text": "<p>One can be an excellent mathematician and be helpless in face of even the 1st order logic (aka predicate theory). Just take a look at <a href=\"https://en.wikipedia.org/wiki/Principia_Mathematica\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Principia_Mathematica</a> (admittedly, this is a bit extreme). I remember taking logic classes and having to face this staff: it felt completely foreign to me, even though it describes (to some extent) how mathematical proofs are structured. </p>\n" }, { "answer_id": 2116792, "score": 2, "text": "<p>If he's a mathematician then he's very good at the kind of logic that's taught in secondary school, which is the only part of logic that's used in actual mathematical proofs. The purpose of that kind of logic is to assure correctness of proofs.</p>\n\n<p>Now suppose someone asks whether the ratio of lengths of proofs to lengths of theorems is bounded. That is also a question of <b>logic.</b> There are many questions of that kind, and that's probably what he was talking about.</p>\n\n<p>PS: It's not bounded.</p>\n" } ]
Question: How can a Fields Medallist be &#39;not very good at logic&#39;? Answer 1: <p>I think the sentiment expressed by the author is pretty clear: being a good mathematician is being able to go back and forth between being able to be really precise (logic), <em>and</em> being able to explore (and maybe even 'intuit') interesting relationships and connections, play with new ideas, speculate, hypothesize, etc.</p> <p>But, as such, I do believe Sir Michael did (probably intentionally) overstate his case a bit: as he says himself, <em>at some point</em> you need to take these new ideas and suspected theorems and make them hard, and that requires being good at logic. So I agree with you there that this seems like a strange statement of his. Indeed, Sir Michael is of course perfectly capable in logical thinking! </p> <p>Then again, he does refer to some of the high technical formalizations of logic, and how that seems almost too 'extreme', too 'restricted' ... how indeed it is hard to see how, say, a deeply formal logical axiomatization of, say, arithmetic, gives us any novel ideas about arithmetic (if you work with the Peano Axioms, I think you'll get the idea). So <em>that</em> kind of logic is something he says that he resists doing. </p> <p>But in the end you're right: to say that he is not good at logic at all, seems overstated, though most likely intentionally so and for dramatic effect: I think it's his way of trying to get those of us who do tend to stay in the purely 'precise' domain to 'venture out' a little more!</p> Answer 2: <p>One can be an excellent mathematician and be helpless in face of even the 1st order logic (aka predicate theory). Just take a look at <a href="https://en.wikipedia.org/wiki/Principia_Mathematica" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Principia_Mathematica</a> (admittedly, this is a bit extreme). 
I remember taking logic classes and having to face this stuff: it felt completely foreign to me, even though it describes (to some extent) how mathematical proofs are structured. </p> Answer 3: <p>If he's a mathematician then he's very good at the kind of logic that's taught in secondary school, which is the only part of logic that's used in actual mathematical proofs. The purpose of that kind of logic is to assure correctness of proofs.</p> <p>Now suppose someone asks whether the ratio of lengths of proofs to lengths of theorems is bounded. That is also a question of <b>logic.</b> There are many questions of that kind, and that's probably what he was talking about.</p> <p>PS: It's not bounded.</p> Answer 4: <p>One can be an excellent mathematician and be helpless in face of even the 1st order logic (aka predicate theory). Just take a look at <a href="https://en.wikipedia.org/wiki/Principia_Mathematica" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Principia_Mathematica</a> (admittedly, this is a bit extreme). 
I remember taking logic classes and having to face this staff: it felt completely foreign to me, even though it describes (to some extent) how mathematical proofs are structured. </p> Answer 21: <p>If he's a mathematician then he's very good at the kind of logic that's taught in secondary school, which is the only part of logic that's used in actual mathematical proofs. The purpose of that kind of logic is to assure correctness of proofs.</p> <p>Now suppose someone asks whether the ratio of lengths of proofs to lengths of theorems is bounded. That is also a question of <b>logic.</b> There are many questions of that kind, and that's probably what he was talking about.</p> <p>PS: It's not bounded.</p> Answer 22: <p>I think the sentiment expressed by the author is pretty clear: being a good mathematician is being able to go back and forth between being able to be really precise (logic), <em>and</em> being able to explore (and maybe even 'intuit') interesting relationships and connections, play with new ideas, speculate, hypothesize, etc.</p> <p>But, as such, I do believe Sir Michael did (probably intentionally) overstate his case a bit: as he says himself, <em>at some point</em> you need to take these new ideas and suspected theorems and make them hard, and that requires being good at logic. So I agree with you there that this seems like a strange statement of his. Indeed, Sir Michael is of course perfectly capable in logical thinking! </p> <p>Then again, he does refer to some of the high technical formalizations of logic, and how that seems almost too 'extreme', too 'restricted' ... how indeed it is hard to see how, say, a deeply formal logical axiomatization of, say, arithmetic, gives us any novel ideas about arithmetic (if you work with the Peano Axioms, I think you'll get the idea). So <em>that</em> kind of logic is something he says that he resists doing. 
</p> <p>But in the end you're right: to say that he is not good at logic at all, seems overstated, though most likely intentionally so and for dramatic effect: I think it's his way of trying to get those of us who do tend to stay in the purely 'precise' domain to 'venture out' a little more!</p> Answer 23: <p>One can be an excellent mathematician and be helpless in face of even the 1st order logic (aka predicate theory). Just take a look at <a href="https://en.wikipedia.org/wiki/Principia_Mathematica" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Principia_Mathematica</a> (admittedly, this is a bit extreme). I remember taking logic classes and having to face this staff: it felt completely foreign to me, even though it describes (to some extent) how mathematical proofs are structured. </p> Answer 24: <p>If he's a mathematician then he's very good at the kind of logic that's taught in secondary school, which is the only part of logic that's used in actual mathematical proofs. The purpose of that kind of logic is to assure correctness of proofs.</p> <p>Now suppose someone asks whether the ratio of lengths of proofs to lengths of theorems is bounded. That is also a question of <b>logic.</b> There are many questions of that kind, and that's probably what he was talking about.</p> <p>PS: It's not bounded.</p> Answer 25: <p>I think the sentiment expressed by the author is pretty clear: being a good mathematician is being able to go back and forth between being able to be really precise (logic), <em>and</em> being able to explore (and maybe even 'intuit') interesting relationships and connections, play with new ideas, speculate, hypothesize, etc.</p> <p>But, as such, I do believe Sir Michael did (probably intentionally) overstate his case a bit: as he says himself, <em>at some point</em> you need to take these new ideas and suspected theorems and make them hard, and that requires being good at logic. 
So I agree with you there that this seems like a strange statement of his. Indeed, Sir Michael is of course perfectly capable in logical thinking! </p> <p>Then again, he does refer to some of the high technical formalizations of logic, and how that seems almost too 'extreme', too 'restricted' ... how indeed it is hard to see how, say, a deeply formal logical axiomatization of, say, arithmetic, gives us any novel ideas about arithmetic (if you work with the Peano Axioms, I think you'll get the idea). So <em>that</em> kind of logic is something he says that he resists doing. </p> <p>But in the end you're right: to say that he is not good at logic at all, seems overstated, though most likely intentionally so and for dramatic effect: I think it's his way of trying to get those of us who do tend to stay in the purely 'precise' domain to 'venture out' a little more!</p> Answer 26: <p>One can be an excellent mathematician and be helpless in face of even the 1st order logic (aka predicate theory). Just take a look at <a href="https://en.wikipedia.org/wiki/Principia_Mathematica" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Principia_Mathematica</a> (admittedly, this is a bit extreme). I remember taking logic classes and having to face this staff: it felt completely foreign to me, even though it describes (to some extent) how mathematical proofs are structured. </p> Answer 27: <p>If he's a mathematician then he's very good at the kind of logic that's taught in secondary school, which is the only part of logic that's used in actual mathematical proofs. The purpose of that kind of logic is to assure correctness of proofs.</p> <p>Now suppose someone asks whether the ratio of lengths of proofs to lengths of theorems is bounded. 
That is also a question of <b>logic.</b> There are many questions of that kind, and that's probably what he was talking about.</p> <p>PS: It's not bounded.</p> Answer 28: <p>I think the sentiment expressed by the author is pretty clear: being a good mathematician is being able to go back and forth between being able to be really precise (logic), <em>and</em> being able to explore (and maybe even 'intuit') interesting relationships and connections, play with new ideas, speculate, hypothesize, etc.</p> <p>But, as such, I do believe Sir Michael did (probably intentionally) overstate his case a bit: as he says himself, <em>at some point</em> you need to take these new ideas and suspected theorems and make them hard, and that requires being good at logic. So I agree with you there that this seems like a strange statement of his. Indeed, Sir Michael is of course perfectly capable in logical thinking! </p> <p>Then again, he does refer to some of the high technical formalizations of logic, and how that seems almost too 'extreme', too 'restricted' ... how indeed it is hard to see how, say, a deeply formal logical axiomatization of, say, arithmetic, gives us any novel ideas about arithmetic (if you work with the Peano Axioms, I think you'll get the idea). So <em>that</em> kind of logic is something he says that he resists doing. </p> <p>But in the end you're right: to say that he is not good at logic at all, seems overstated, though most likely intentionally so and for dramatic effect: I think it's his way of trying to get those of us who do tend to stay in the purely 'precise' domain to 'venture out' a little more!</p> Answer 29: <p>One can be an excellent mathematician and be helpless in face of even the 1st order logic (aka predicate theory). Just take a look at <a href="https://en.wikipedia.org/wiki/Principia_Mathematica" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Principia_Mathematica</a> (admittedly, this is a bit extreme). 
I remember taking logic classes and having to face this staff: it felt completely foreign to me, even though it describes (to some extent) how mathematical proofs are structured. </p> Answer 30: <p>If he's a mathematician then he's very good at the kind of logic that's taught in secondary school, which is the only part of logic that's used in actual mathematical proofs. The purpose of that kind of logic is to assure correctness of proofs.</p> <p>Now suppose someone asks whether the ratio of lengths of proofs to lengths of theorems is bounded. That is also a question of <b>logic.</b> There are many questions of that kind, and that's probably what he was talking about.</p> <p>PS: It's not bounded.</p>
How do I find the equation of a curve formed by a series of lines tangent to a curve?
2,117,036
6
https://math.stackexchange.com/questions/2117036/how-do-i-find-the-equation-of-a-curve-formed-by-a-series-of-lines-tangent-to-a-c
<p>When I was young I used to draw a sequence of straight lines on graph paper which made a curve after I finished. On a coordinate plane, the lines would be equivalent to starting at $y=9$ on the $y$ axis and ending at $x=1$ on the $x$ axis. With each line, I would decrease the $y$ by one unit and increase the $x$ by one unit. </p> <p>Here is a desmos graph that illustrates. <a href="https://www.desmos.com/calculator/u4ea8swmfg" rel="nofollow noreferrer">https://www.desmos.com/calculator/u4ea8swmfg</a></p> <p>Here is a similar example where the angle between the lines is 60 degrees. <a href="https://drive.google.com/open?id=0B5QHq_oPha0ybGdrbFNhUHRPOGc" rel="nofollow noreferrer">https://drive.google.com/open?id=0B5QHq_oPha0ybGdrbFNhUHRPOGc</a></p> <p>I think each line is basically a tangent line along the curve produced by taking a first derivative.</p> <p>Are these types of curves parabolas or possibly hyperbolae? And how could I find the equation of such a curve?</p>
[ { "answer_id": 2117075, "score": 4, "text": "<p>The curve you have described is in fact a parabola; specifically, it is the parabola generated as the quadratic <a href=\"https://en.wikipedia.org/wiki/B%C3%A9zier_curve\" rel=\"nofollow noreferrer\">Bézier curve</a> with the control points as the ends of your lines.</p>\n\n<p>The equation of this parabola is, for three points $a, b, c$:</p>\n\n<p>$$f(t)=(1-t)^2a+2t(1-t)b+t^2c$$</p>\n\n<p>This is a parametric formula; you enter a $t$ value and it gives the $x$ and $y$ values of a point on the curve, as opposed to entering an $x$ value and getting the $y$ value out.</p>\n\n<p>In the first case, with the right angle, the parabola is simply</p>\n\n<p>$$f(t)=\\left(10t^2,10(1-t)^2\\right)$$</p>\n\n<p>shown in red on this figure.</p>\n\n<p><a href=\"https://i.stack.imgur.com/GD0Yj.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/GD0Yj.png\" alt=\"enter image description here\"></a></p>\n\n<p>The $60^\\circ$ angle looks like this.</p>\n\n<p><a href=\"https://i.stack.imgur.com/9MErX.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9MErX.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2117062, "score": 1, "text": "<p>With points $A_n(0,n)$ on the $y$-axis and $B_n(10-n,0)$ on the $x$-axis, the equations of the lines through them are\n$$y=\\dfrac{-n}{10-n}x+n$$\nWith $C$ as the parameter of the family of lines, we write\n$$y=\\dfrac{-C}{10-C}x+C$$\nFor the envelope of the family, $\\dfrac{\\partial f}{\\partial C}=0$ where\n$$f=y+\\dfrac{C}{10-C}x-C$$\nAfter eliminating $C$ between $f=0$ and $\\dfrac{\\partial f}{\\partial C}=0$ we get\n$$10x=\\Big(10-\\frac{y\\sqrt{10}}{\\sqrt{10}-\\sqrt{x}}\\Big)^2$$</p>\n" } ]
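The quadratic Bézier construction described in the first answer can be checked numerically. Below is a minimal Python sketch (names and sample values are my own choices, not from the answers) using the standard Bézier weights $(1-t)^2$, $2t(1-t)$, $t^2$ and the control points $(0,10)$, $(0,0)$, $(10,0)$ that match the right-angle case:

```python
import math

def bezier2(a, b, c, t):
    """Quadratic Bezier point: (1-t)^2 * a + 2*t*(1-t) * b + t^2 * c, componentwise."""
    return tuple((1 - t) ** 2 * ai + 2 * t * (1 - t) * bi + t ** 2 * ci
                 for ai, bi, ci in zip(a, b, c))

# Right-angle case from the question: segments join (0, n) to (10 - n, 0).
# Control points: the two axis endpoints and the corner where the axes meet.
a, b, c = (0.0, 10.0), (0.0, 0.0), (10.0, 0.0)

for k in range(11):
    t = k / 10
    x, y = bezier2(a, b, c, t)
    # With b at the corner the middle term vanishes, giving (10t^2, 10(1-t)^2),
    # so every point of the curve satisfies sqrt(x) + sqrt(y) = sqrt(10).
    assert abs(math.sqrt(x) + math.sqrt(y) - math.sqrt(10)) < 1e-9
```

With the middle control point at the corner, the parametric curve reduces exactly to the $\left(10t^2,\,10(1-t)^2\right)$ form quoted in the answer.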
Question: How do I find the equation of a curve formed by a series of lines tangent to a curve? Answer 1: <p>The curve you have described is in fact a parabola; specifically, it is the parabola generated as the quadratic <a href="https://en.wikipedia.org/wiki/B%C3%A9zier_curve" rel="nofollow noreferrer">Bézier curve</a> with the control points as the ends of your lines.</p> <p>The equation of this parabola is, for three points $a, b, c$:</p> <p>$$f(t)=(1-t)^2a+t(1-t)b+t^2c$$</p> <p>This is a parametric formula; you enter a $t$ value and it gives the $x$ and $y$ values of a point on the curve, as opposed to entering an $x$ value and getting the $y$ value out.</p> <p>In the first case, with the right angle, the parabola is simply</p> <p>$$f(t)=\left(10t^2,10(1-t)^2\right)$$</p> <p>shown in red on this figure.</p> <p><a href="https://i.stack.imgur.com/GD0Yj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GD0Yj.png" alt="enter image description here"></a></p> <p>The $60^\circ$ angle looks like this.</p> <p><a href="https://i.stack.imgur.com/9MErX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9MErX.png" alt="enter image description here"></a></p> Answer 2: <p>With points $A_n(0,n)$ on $y-$axis and $B_n(10-n,0)$ on $x-$axis the equations of all lines pass through are $$y=\dfrac{-n}{10-n}x+n$$ with $C$ as the variable of curves family, we write $$y=\dfrac{-C}{10-C}x+C$$ for envelope of curves family, $\dfrac{\partial f}{\partial C}=0$ where $$f=y+\dfrac{C}{10-C}x-C$$ after deletion $C$ between $f$ and $\dfrac{\partial f}{\partial C}=0$ we get $$10x=\Big(10-\frac{y\sqrt{10}}{\sqrt{10}-\sqrt{x}}\Big)^2$$</p>
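The parametric Bézier form and the envelope equation above describe the same parabola. A short numeric sketch (Python; the 10-unit axis intercepts are taken from the question, and the identification $n = 10(1-t)$ pairing a curve point with its tangent line is an assumption made here, not stated in either answer) checks both claims:

```python
import math

# Quadratic Bezier with control points (0, 10), (0, 0), (10, 0):
# f(t) = (1-t)^2 (0,10) + t(1-t) (0,0) + t^2 (10,0) = (10 t^2, 10 (1-t)^2)
def bezier(t):
    return 10 * t**2, 10 * (1 - t) ** 2

for i in range(1, 10):          # t = 0.1 ... 0.9; endpoints excluded to
    t = i / 10                  # avoid a zero denominator in the envelope
    x, y = bezier(t)

    # Assumed pairing: the point lies on the family line
    # y = -n/(10-n) * x + n with intercept n = 10(1-t).
    n = 10 * (1 - t)
    assert abs(y - (-n / (10 - n) * x + n)) < 1e-9

    # ...and satisfies the envelope equation from the second answer.
    lhs = 10 * x
    rhs = (10 - y * math.sqrt(10) / (math.sqrt(10) - math.sqrt(x))) ** 2
    assert abs(lhs - rhs) < 1e-9
```

Both identities hold to numerical precision along the curve, which is exactly the tangency the question describes.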
Change of variables for $\iint_D x\sin(y-x^2) \, dA$
2,117,157
6
https://math.stackexchange.com/questions/2117157/change-of-variables-for-iint-d-x-siny-x2-da
<p>Let $D$ be the region in the first quadrant of the $x,y$-plane, bounded by the curves $y=x^2$, $y=x^2+1$, $x+y=1$, and $x+y=2$. </p> <p>I want to find a change of variables for $\iint_D x\sin(y-x^2) \, dA$.</p> <p>Since $y-x^2=0$, $y-x^2=1$, my inclination is to let $u=y-x^2$, and since $x+y=1$ and $x+y=2$ I want to let $v=x+y$. However, when solving for $x$ in terms of $u$ and $v$ I get a horrid looking thing. </p> <p>Could someone provide me with a better way of approaching this problem? Thank you!</p>
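The substitution in the question can be sanity-checked numerically before worrying about the inverse: $(u,v)=(y-x^2,\,x+y)$ maps $D$ onto the rectangle $0\le u\le 1$, $1\le v\le 2$, the "horrid looking thing" is just the positive root of $x^2+x=v-u$, and $|\partial(u,v)/\partial(x,y)|=1+2x$. The Python sketch below (grid resolutions are arbitrary choices, not part of the problem) compares a direct midpoint sum over $D$ with the transformed integral:

```python
import math

def x_of(u, v):
    # Positive root of x^2 + x = v - u (the inverse of the substitution).
    return (-1 + math.sqrt(1 + 4 * (v - u))) / 2

# Direct midpoint sum of x*sin(y - x^2) over D, via an indicator of the region.
n = 400
h = 1.0 / n
direct = 0.0
for i in range(n):              # x in [0, 1]  (x + x^2 <= 2 forces x <= 1)
    x = (i + 0.5) * h
    for j in range(2 * n):      # y in [0, 2]  (y <= x^2 + 1 <= 2 on D)
        y = (j + 0.5) * h
        if x * x <= y <= x * x + 1 and 1 <= x + y <= 2:
            direct += x * math.sin(y - x * x) * h * h

# Same integral over the (u, v) rectangle [0,1] x [1,2]:
# |d(u,v)/d(x,y)| = |det [[-2x, 1], [1, 1]]| = 1 + 2x, so dA = du dv / (1 + 2x).
m = 300
transformed = 0.0
for i in range(m):
    u = (i + 0.5) / m
    for j in range(m):
        v = 1 + (j + 0.5) / m
        x = x_of(u, v)
        transformed += x * math.sin(u) / (1 + 2 * x) / (m * m)

assert abs(direct - transformed) < 0.02
```

The two sums agree to within grid error, so only the evaluation of the resulting one-dimensional integrals, not the change of variables itself, is the hard part.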
[ { "answer_id": 2117860, "score": 2, "text": "<p>Here's half an answer. Maybe you went down this path already:</p>\n\n<p>If you let $u=x^2, v=x+y$ some things are less horrid. First, the Jacobian is $\\frac{1}{2x}$ which cancels the first $x$ in the integrand. The pre-image $D^*$ of the domain is bounded by $v=1, v=2, v=u+\\sqrt{u},$ and $v=u+\\sqrt{u}+1.$ </p>\n\n<p>At first glance $D^*$ tempts you to integrate in the order $du \\; dv.$ But you run into the same horridness solving those equations for $u$. But happily, the 4 points of intersections are not that bad and (here's the good part) the middle two have the same $u$ coordinate $a=(3-\\sqrt{5})/2$. So you can break $D^*$ into two regions. </p>\n\n<p>First region is $u=0$ to $a$, $v=1$ to $u+\\sqrt{u} +1$.</p>\n\n<p>Second region is $u = a$ to $1$, $v=u+\\sqrt{u}$ to $2$.</p>\n\n<p>Everything works swimmingly until the end when there are two integrals:</p>\n\n<p>$$\\int_0^a \\frac{1}{2} \\cos(1-\\sqrt{u}-u) \\; du$$</p>\n\n<p>and</p>\n\n<p>$$\\int_a^1 \\frac{1}{2} \\cos(2-\\sqrt{u}-u) \\; du.$$</p>\n\n<p>I was hoping for a clever substitution that would change one of them into the other and then they'd cancel out. I've had no luck with that so far.</p>\n" }, { "answer_id": 2876115, "score": 1, "text": "<p>I have added an answer to an other question which solves this question as well or shows where the problem is in general for this integral to be evaluated: <a href=\"https://math.stackexchange.com/a/2876108/541516\">https://math.stackexchange.com/a/2876108/541516</a></p>\n\n<p>An other interesting variable change which can take us much further and show that this integral cannot be evaluated to closed form solution without series expansion, would be to remove away $x$ in the integral. See that we have $x^2$ inside $\\sin(\\cdot)$ which can make this happen.</p>\n\n<p>Let, $k = x^2$ and keeping $y$ as it is, which gives a jacobian of $1/2x$ and cancels with the $x$. 
Now, although we have $\\sin(y-k)$ kind of structure which we can expand by trigonometric identity of $\\sin(A-B)$ into two parts when function of $k$ and $y$ are independent, it is better to do an other variable change which makes integral limits associated with $y$ to fixed limits. </p>\n\n<p>See that now the limit curves are of the form $y-k = c$ and $\\sqrt k + y = c$.</p>\n\n<p>So, we can take new variable $t = y-k$ which actually does this, making the limits for $t$ to be 0 to 1. Limits for $k$ are taken to be dependent on t and of the form $\\sqrt k + k + t = c$. Now the inner function is just $sin(t)$ which can be pulled out of the integral with $k$. Now to evaluate the integral with $k$ we just have to evaluate the limits i.e. $\\sqrt k + k + t = 1$ and $\\sqrt k + k + t = 2$. If we solve the quadratic equations in $\\sqrt k$ and then take square the positive of the quadratic solution which is ($-1 + \\sqrt{1 + 4(c-t)}$)/2, we get, constants, $t$ terms and $\\sqrt{c-t}$ kind of terms. And remember a $sin(t)$ is waiting for us and limits are 0 to 1 in variable t! </p>\n\n<p>Now, $\\sin(t)$ can be evaluated, $t \\sin(t)$ is well known by integration by parts. The problematic part is the terms with the structure $\\sqrt{c-t} \\sin(t)$. To get rigid of c (which is 1 and 2 here), we can do variable transformation and $p = c-t$ which brings us a structure $\\sqrt p \\sin(c - p)$ which similar to the general structure, $\\sqrt p \\sin(p)$ through expansion of $\\sin(c - p)$. 
</p>\n\n<p>So, now to this $\\sqrt{p} \\sin(p)$ when we do integration by parts, this ends up with having an integral of $cos(p)/\\sqrt{p}$ to be evaluated, which on variable transformation $p = u^2$ gives integral of $\\cos(u^2)$ which further more cannot be simplified by hand and is usually termed Fresnel integral which has series expansion ( see <a href=\"https://en.wikipedia.org/wiki/Fresnel_integral\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Fresnel_integral</a>).</p>\n\n<p>PS: I have focused only on the hard terms and ignored signs, constants and other easy terms coming from integration by parts just for transparent idea of what creates problem, hope it is understandable, let me know if some part is unclear.</p>\n" }, { "answer_id": 2117860, "score": 2, "text": "<p>Here's half an answer. Maybe you went down this path already:</p>\n\n<p>If you let $u=x^2, v=x+y$ some things are less horrid. First, the Jacobian is $\\frac{1}{2x}$ which cancels the first $x$ in the integrand. The pre-image $D^*$ of the domain is bounded by $v=1, v=2, v=u+\\sqrt{u},$ and $v=u+\\sqrt{u}+1.$ </p>\n\n<p>At first glance $D^*$ tempts you to integrate in the order $du \\; dv.$ But you run into the same horridness solving those equations for $u$. But happily, the 4 points of intersections are not that bad and (here's the good part) the middle two have the same $u$ coordinate $a=(3-\\sqrt{5})/2$. So you can break $D^*$ into two regions. </p>\n\n<p>First region is $u=0$ to $a$, $v=1$ to $u+\\sqrt{u} +1$.</p>\n\n<p>Second region is $u = a$ to $1$, $v=u+\\sqrt{u}$ to $2$.</p>\n\n<p>Everything works swimmingly until the end when there are two integrals:</p>\n\n<p>$$\\int_0^a \\frac{1}{2} \\cos(1-\\sqrt{u}-u) \\; du$$</p>\n\n<p>and</p>\n\n<p>$$\\int_a^1 \\frac{1}{2} \\cos(2-\\sqrt{u}-u) \\; du.$$</p>\n\n<p>I was hoping for a clever substitution that would change one of them into the other and then they'd cancel out. 
I've had no luck with that so far.</p>\n" }, { "answer_id": 2876115, "score": 1, "text": "<p>I have added an answer to an other question which solves this question as well or shows where the problem is in general for this integral to be evaluated: <a href=\"https://math.stackexchange.com/a/2876108/541516\">https://math.stackexchange.com/a/2876108/541516</a></p>\n\n<p>An other interesting variable change which can take us much further and show that this integral cannot be evaluated to closed form solution without series expansion, would be to remove away $x$ in the integral. See that we have $x^2$ inside $\\sin(\\cdot)$ which can make this happen.</p>\n\n<p>Let, $k = x^2$ and keeping $y$ as it is, which gives a jacobian of $1/2x$ and cancels with the $x$. Now, although we have $\\sin(y-k)$ kind of structure which we can expand by trigonometric identity of $\\sin(A-B)$ into two parts when function of $k$ and $y$ are independent, it is better to do an other variable change which makes integral limits associated with $y$ to fixed limits. </p>\n\n<p>See that now the limit curves are of the form $y-k = c$ and $\\sqrt k + y = c$.</p>\n\n<p>So, we can take new variable $t = y-k$ which actually does this, making the limits for $t$ to be 0 to 1. Limits for $k$ are taken to be dependent on t and of the form $\\sqrt k + k + t = c$. Now the inner function is just $sin(t)$ which can be pulled out of the integral with $k$. Now to evaluate the integral with $k$ we just have to evaluate the limits i.e. $\\sqrt k + k + t = 1$ and $\\sqrt k + k + t = 2$. If we solve the quadratic equations in $\\sqrt k$ and then take square the positive of the quadratic solution which is ($-1 + \\sqrt{1 + 4(c-t)}$)/2, we get, constants, $t$ terms and $\\sqrt{c-t}$ kind of terms. And remember a $sin(t)$ is waiting for us and limits are 0 to 1 in variable t! </p>\n\n<p>Now, $\\sin(t)$ can be evaluated, $t \\sin(t)$ is well known by integration by parts. 
The problematic part is the terms with the structure $\\sqrt{c-t} \\sin(t)$. To get rigid of c (which is 1 and 2 here), we can do variable transformation and $p = c-t$ which brings us a structure $\\sqrt p \\sin(c - p)$ which similar to the general structure, $\\sqrt p \\sin(p)$ through expansion of $\\sin(c - p)$. </p>\n\n<p>So, now to this $\\sqrt{p} \\sin(p)$ when we do integration by parts, this ends up with having an integral of $cos(p)/\\sqrt{p}$ to be evaluated, which on variable transformation $p = u^2$ gives integral of $\\cos(u^2)$ which further more cannot be simplified by hand and is usually termed Fresnel integral which has series expansion ( see <a href=\"https://en.wikipedia.org/wiki/Fresnel_integral\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Fresnel_integral</a>).</p>\n\n<p>PS: I have focused only on the hard terms and ignored signs, constants and other easy terms coming from integration by parts just for transparent idea of what creates problem, hope it is understandable, let me know if some part is unclear.</p>\n" }, { "answer_id": 2117860, "score": 2, "text": "<p>Here's half an answer. Maybe you went down this path already:</p>\n\n<p>If you let $u=x^2, v=x+y$ some things are less horrid. First, the Jacobian is $\\frac{1}{2x}$ which cancels the first $x$ in the integrand. The pre-image $D^*$ of the domain is bounded by $v=1, v=2, v=u+\\sqrt{u},$ and $v=u+\\sqrt{u}+1.$ </p>\n\n<p>At first glance $D^*$ tempts you to integrate in the order $du \\; dv.$ But you run into the same horridness solving those equations for $u$. But happily, the 4 points of intersections are not that bad and (here's the good part) the middle two have the same $u$ coordinate $a=(3-\\sqrt{5})/2$. So you can break $D^*$ into two regions. 
</p>\n\n<p>First region is $u=0$ to $a$, $v=1$ to $u+\\sqrt{u} +1$.</p>\n\n<p>Second region is $u = a$ to $1$, $v=u+\\sqrt{u}$ to $2$.</p>\n\n<p>Everything works swimmingly until the end when there are two integrals:</p>\n\n<p>$$\\int_0^a \\frac{1}{2} \\cos(1-\\sqrt{u}-u) \\; du$$</p>\n\n<p>and</p>\n\n<p>$$\\int_a^1 \\frac{1}{2} \\cos(2-\\sqrt{u}-u) \\; du.$$</p>\n\n<p>I was hoping for a clever substitution that would change one of them into the other and then they'd cancel out. I've had no luck with that so far.</p>\n" }, { "answer_id": 2876115, "score": 1, "text": "<p>I have added an answer to an other question which solves this question as well or shows where the problem is in general for this integral to be evaluated: <a href=\"https://math.stackexchange.com/a/2876108/541516\">https://math.stackexchange.com/a/2876108/541516</a></p>\n\n<p>An other interesting variable change which can take us much further and show that this integral cannot be evaluated to closed form solution without series expansion, would be to remove away $x$ in the integral. See that we have $x^2$ inside $\\sin(\\cdot)$ which can make this happen.</p>\n\n<p>Let, $k = x^2$ and keeping $y$ as it is, which gives a jacobian of $1/2x$ and cancels with the $x$. Now, although we have $\\sin(y-k)$ kind of structure which we can expand by trigonometric identity of $\\sin(A-B)$ into two parts when function of $k$ and $y$ are independent, it is better to do an other variable change which makes integral limits associated with $y$ to fixed limits. </p>\n\n<p>See that now the limit curves are of the form $y-k = c$ and $\\sqrt k + y = c$.</p>\n\n<p>So, we can take new variable $t = y-k$ which actually does this, making the limits for $t$ to be 0 to 1. Limits for $k$ are taken to be dependent on t and of the form $\\sqrt k + k + t = c$. Now the inner function is just $sin(t)$ which can be pulled out of the integral with $k$. Now to evaluate the integral with $k$ we just have to evaluate the limits i.e. 
$\\sqrt k + k + t = 1$ and $\\sqrt k + k + t = 2$. If we solve the quadratic equations in $\\sqrt k$ and then take square the positive of the quadratic solution which is ($-1 + \\sqrt{1 + 4(c-t)}$)/2, we get, constants, $t$ terms and $\\sqrt{c-t}$ kind of terms. And remember a $sin(t)$ is waiting for us and limits are 0 to 1 in variable t! </p>\n\n<p>Now, $\\sin(t)$ can be evaluated, $t \\sin(t)$ is well known by integration by parts. The problematic part is the terms with the structure $\\sqrt{c-t} \\sin(t)$. To get rigid of c (which is 1 and 2 here), we can do variable transformation and $p = c-t$ which brings us a structure $\\sqrt p \\sin(c - p)$ which similar to the general structure, $\\sqrt p \\sin(p)$ through expansion of $\\sin(c - p)$. </p>\n\n<p>So, now to this $\\sqrt{p} \\sin(p)$ when we do integration by parts, this ends up with having an integral of $cos(p)/\\sqrt{p}$ to be evaluated, which on variable transformation $p = u^2$ gives integral of $\\cos(u^2)$ which further more cannot be simplified by hand and is usually termed Fresnel integral which has series expansion ( see <a href=\"https://en.wikipedia.org/wiki/Fresnel_integral\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Fresnel_integral</a>).</p>\n\n<p>PS: I have focused only on the hard terms and ignored signs, constants and other easy terms coming from integration by parts just for transparent idea of what creates problem, hope it is understandable, let me know if some part is unclear.</p>\n" }, { "answer_id": 2117860, "score": 2, "text": "<p>Here's half an answer. Maybe you went down this path already:</p>\n\n<p>If you let $u=x^2, v=x+y$ some things are less horrid. First, the Jacobian is $\\frac{1}{2x}$ which cancels the first $x$ in the integrand. 
The pre-image $D^*$ of the domain is bounded by $v=1, v=2, v=u+\\sqrt{u},$ and $v=u+\\sqrt{u}+1.$ </p>\n\n<p>At first glance $D^*$ tempts you to integrate in the order $du \\; dv.$ But you run into the same horridness solving those equations for $u$. But happily, the 4 points of intersections are not that bad and (here's the good part) the middle two have the same $u$ coordinate $a=(3-\\sqrt{5})/2$. So you can break $D^*$ into two regions. </p>\n\n<p>First region is $u=0$ to $a$, $v=1$ to $u+\\sqrt{u} +1$.</p>\n\n<p>Second region is $u = a$ to $1$, $v=u+\\sqrt{u}$ to $2$.</p>\n\n<p>Everything works swimmingly until the end when there are two integrals:</p>\n\n<p>$$\\int_0^a \\frac{1}{2} \\cos(1-\\sqrt{u}-u) \\; du$$</p>\n\n<p>and</p>\n\n<p>$$\\int_a^1 \\frac{1}{2} \\cos(2-\\sqrt{u}-u) \\; du.$$</p>\n\n<p>I was hoping for a clever substitution that would change one of them into the other and then they'd cancel out. I've had no luck with that so far.</p>\n" }, { "answer_id": 2876115, "score": 1, "text": "<p>I have added an answer to an other question which solves this question as well or shows where the problem is in general for this integral to be evaluated: <a href=\"https://math.stackexchange.com/a/2876108/541516\">https://math.stackexchange.com/a/2876108/541516</a></p>\n\n<p>An other interesting variable change which can take us much further and show that this integral cannot be evaluated to closed form solution without series expansion, would be to remove away $x$ in the integral. See that we have $x^2$ inside $\\sin(\\cdot)$ which can make this happen.</p>\n\n<p>Let, $k = x^2$ and keeping $y$ as it is, which gives a jacobian of $1/2x$ and cancels with the $x$. Now, although we have $\\sin(y-k)$ kind of structure which we can expand by trigonometric identity of $\\sin(A-B)$ into two parts when function of $k$ and $y$ are independent, it is better to do an other variable change which makes integral limits associated with $y$ to fixed limits. 
</p>\n\n<p>See that now the limit curves are of the form $y-k = c$ and $\\sqrt k + y = c$.</p>\n\n<p>So, we can take new variable $t = y-k$ which actually does this, making the limits for $t$ to be 0 to 1. Limits for $k$ are taken to be dependent on t and of the form $\\sqrt k + k + t = c$. Now the inner function is just $sin(t)$ which can be pulled out of the integral with $k$. Now to evaluate the integral with $k$ we just have to evaluate the limits i.e. $\\sqrt k + k + t = 1$ and $\\sqrt k + k + t = 2$. If we solve the quadratic equations in $\\sqrt k$ and then take square the positive of the quadratic solution which is ($-1 + \\sqrt{1 + 4(c-t)}$)/2, we get, constants, $t$ terms and $\\sqrt{c-t}$ kind of terms. And remember a $sin(t)$ is waiting for us and limits are 0 to 1 in variable t! </p>\n\n<p>Now, $\\sin(t)$ can be evaluated, $t \\sin(t)$ is well known by integration by parts. The problematic part is the terms with the structure $\\sqrt{c-t} \\sin(t)$. To get rigid of c (which is 1 and 2 here), we can do variable transformation and $p = c-t$ which brings us a structure $\\sqrt p \\sin(c - p)$ which similar to the general structure, $\\sqrt p \\sin(p)$ through expansion of $\\sin(c - p)$. 
</p>\n\n<p>So, now to this $\\sqrt{p} \\sin(p)$ when we do integration by parts, this ends up with having an integral of $cos(p)/\\sqrt{p}$ to be evaluated, which on variable transformation $p = u^2$ gives integral of $\\cos(u^2)$ which further more cannot be simplified by hand and is usually termed Fresnel integral which has series expansion ( see <a href=\"https://en.wikipedia.org/wiki/Fresnel_integral\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Fresnel_integral</a>).</p>\n\n<p>PS: I have focused only on the hard terms and ignored signs, constants and other easy terms coming from integration by parts just for transparent idea of what creates problem, hope it is understandable, let me know if some part is unclear.</p>\n" }, { "answer_id": 2117860, "score": 2, "text": "<p>Here's half an answer. Maybe you went down this path already:</p>\n\n<p>If you let $u=x^2, v=x+y$ some things are less horrid. First, the Jacobian is $\\frac{1}{2x}$ which cancels the first $x$ in the integrand. The pre-image $D^*$ of the domain is bounded by $v=1, v=2, v=u+\\sqrt{u},$ and $v=u+\\sqrt{u}+1.$ </p>\n\n<p>At first glance $D^*$ tempts you to integrate in the order $du \\; dv.$ But you run into the same horridness solving those equations for $u$. But happily, the 4 points of intersections are not that bad and (here's the good part) the middle two have the same $u$ coordinate $a=(3-\\sqrt{5})/2$. So you can break $D^*$ into two regions. </p>\n\n<p>First region is $u=0$ to $a$, $v=1$ to $u+\\sqrt{u} +1$.</p>\n\n<p>Second region is $u = a$ to $1$, $v=u+\\sqrt{u}$ to $2$.</p>\n\n<p>Everything works swimmingly until the end when there are two integrals:</p>\n\n<p>$$\\int_0^a \\frac{1}{2} \\cos(1-\\sqrt{u}-u) \\; du$$</p>\n\n<p>and</p>\n\n<p>$$\\int_a^1 \\frac{1}{2} \\cos(2-\\sqrt{u}-u) \\; du.$$</p>\n\n<p>I was hoping for a clever substitution that would change one of them into the other and then they'd cancel out. 
I've had no luck with that so far.</p>\n" }, { "answer_id": 2876115, "score": 1, "text": "<p>I have added an answer to another question which solves this question as well, or shows where the problem lies in general for evaluating this integral: <a href=\"https://math.stackexchange.com/a/2876108/541516\">https://math.stackexchange.com/a/2876108/541516</a></p>\n\n<p>Another interesting variable change, which can take us much further and show that this integral cannot be evaluated in closed form without a series expansion, is to remove the $x$ in the integrand. See that we have $x^2$ inside $\\sin(\\cdot)$, which makes this possible.</p>\n\n<p>Let $k = x^2$, keeping $y$ as it is, which gives a Jacobian of $1/2x$ that cancels the $x$. Now, although we have a $\\sin(y-k)$ kind of structure, which we can expand by the trigonometric identity for $\\sin(A-B)$ into two parts with independent functions of $k$ and $y$, it is better to do another variable change which makes the integral limits associated with $y$ fixed. </p>\n\n<p>See that now the limit curves are of the form $y-k = c$ and $\\sqrt k + y = c$.</p>\n\n<p>So, we can take the new variable $t = y-k$, which actually does this, making the limits for $t$ run from 0 to 1. The limits for $k$ are taken to be dependent on $t$ and of the form $\\sqrt k + k + t = c$. Now the inner function is just $\\sin(t)$, which can be pulled out of the integral over $k$. Now to evaluate the integral over $k$ we just have to evaluate the limits, i.e. $\\sqrt k + k + t = 1$ and $\\sqrt k + k + t = 2$. If we solve the quadratic equations in $\\sqrt k$ and then square the positive root, which is $(-1 + \\sqrt{1 + 4(c-t)})/2$, we get constants, $t$ terms and $\\sqrt{c-t}$ kind of terms. And remember a $\\sin(t)$ is waiting for us, with limits 0 to 1 in the variable $t$! </p>\n\n<p>Now, $\\sin(t)$ can be evaluated, and $t \\sin(t)$ is well known by integration by parts. 
The problematic part is the terms with the structure $\\sqrt{c-t} \\sin(t)$. To get rid of $c$ (which is 1 and 2 here), we can apply the variable transformation $p = c-t$, which brings us to the structure $\\sqrt p \\sin(c - p)$, which reduces to the general structure $\\sqrt p \\sin(p)$ through expansion of $\\sin(c - p)$. </p>\n\n<p>So, now to this $\\sqrt{p} \\sin(p)$: when we apply integration by parts, we end up with an integral of $\\cos(p)/\\sqrt{p}$ to evaluate, which on the variable transformation $p = u^2$ gives the integral of $\\cos(u^2)$, which cannot be simplified further by hand and is usually termed a Fresnel integral, which has a series expansion (see <a href=\"https://en.wikipedia.org/wiki/Fresnel_integral\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Fresnel_integral</a>).</p>\n\n<p>PS: I have focused only on the hard terms and ignored signs, constants and other easy terms coming from integration by parts, just to give a transparent idea of what creates the problem; hope it is understandable, let me know if some part is unclear.</p>\n" } ]
Question: Change of variables for $\iint_D x\sin(y-x^2) \, dA$ Answer 1: <p>Here's half an answer. Maybe you went down this path already:</p> <p>If you let $u=x^2, v=x+y$ some things are less horrid. First, the Jacobian is $\frac{1}{2x}$ which cancels the first $x$ in the integrand. The pre-image $D^*$ of the domain is bounded by $v=1, v=2, v=u+\sqrt{u},$ and $v=u+\sqrt{u}+1.$ </p> <p>At first glance $D^*$ tempts you to integrate in the order $du \; dv.$ But you run into the same horridness solving those equations for $u$. But happily, the 4 points of intersection are not that bad and (here's the good part) the middle two have the same $u$ coordinate $a=(3-\sqrt{5})/2$. So you can break $D^*$ into two regions. </p> <p>First region is $u=0$ to $a$, $v=1$ to $u+\sqrt{u} +1$.</p> <p>Second region is $u = a$ to $1$, $v=u+\sqrt{u}$ to $2$.</p> <p>Everything works swimmingly until the end when there are two integrals:</p> <p>$$\int_0^a \frac{1}{2} \cos(1-\sqrt{u}-u) \; du$$</p> <p>and</p> <p>$$\int_a^1 \frac{1}{2} \cos(2-\sqrt{u}-u) \; du.$$</p> <p>I was hoping for a clever substitution that would change one of them into the other and then they'd cancel out. I've had no luck with that so far.</p> Answer 2: <p>I have added an answer to another question which solves this question as well, or shows where the problem lies in general for evaluating this integral: <a href="https://math.stackexchange.com/a/2876108/541516">https://math.stackexchange.com/a/2876108/541516</a></p> <p>Another interesting variable change, which can take us much further and show that this integral cannot be evaluated in closed form without a series expansion, is to remove the $x$ in the integrand. See that we have $x^2$ inside $\sin(\cdot)$, which makes this possible.</p> <p>Let $k = x^2$, keeping $y$ as it is, which gives a Jacobian of $1/2x$ that cancels the $x$. 
Now, although we have a $\sin(y-k)$ kind of structure, which we can expand by the trigonometric identity for $\sin(A-B)$ into two parts with independent functions of $k$ and $y$, it is better to do another variable change which makes the integral limits associated with $y$ fixed. </p> <p>See that now the limit curves are of the form $y-k = c$ and $\sqrt k + y = c$.</p> <p>So, we can take the new variable $t = y-k$, which actually does this, making the limits for $t$ run from 0 to 1. The limits for $k$ are taken to be dependent on $t$ and of the form $\sqrt k + k + t = c$. Now the inner function is just $\sin(t)$, which can be pulled out of the integral over $k$. Now to evaluate the integral over $k$ we just have to evaluate the limits, i.e. $\sqrt k + k + t = 1$ and $\sqrt k + k + t = 2$. If we solve the quadratic equations in $\sqrt k$ and then square the positive root, which is $(-1 + \sqrt{1 + 4(c-t)})/2$, we get constants, $t$ terms and $\sqrt{c-t}$ kind of terms. And remember a $\sin(t)$ is waiting for us, with limits 0 to 1 in the variable $t$! </p> <p>Now, $\sin(t)$ can be evaluated, and $t \sin(t)$ is well known by integration by parts. The problematic part is the terms with the structure $\sqrt{c-t} \sin(t)$. To get rid of $c$ (which is 1 and 2 here), we can apply the variable transformation $p = c-t$, which brings us to the structure $\sqrt p \sin(c - p)$, which reduces to the general structure $\sqrt p \sin(p)$ through expansion of $\sin(c - p)$. 
</p> <p>So, now to this $\sqrt{p} \sin(p)$: when we apply integration by parts, we end up with an integral of $\cos(p)/\sqrt{p}$ to evaluate, which on the variable transformation $p = u^2$ gives the integral of $\cos(u^2)$, which cannot be simplified further by hand and is usually termed a Fresnel integral, which has a series expansion (see <a href="https://en.wikipedia.org/wiki/Fresnel_integral" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Fresnel_integral</a>).</p> <p>PS: I have focused only on the hard terms and ignored signs, constants and other easy terms coming from integration by parts, just to give a transparent idea of what creates the problem; hope it is understandable, let me know if some part is unclear.</p>
The pre-image $D^*$ of the domain is bounded by $v=1, v=2, v=u+\sqrt{u},$ and $v=u+\sqrt{u}+1.$ </p> <p>At first glance $D^*$ tempts you to integrate in the order $du \; dv.$ But you run into the same horridness solving those equations for $u$. But happily, the 4 points of intersections are not that bad and (here's the good part) the middle two have the same $u$ coordinate $a=(3-\sqrt{5})/2$. So you can break $D^*$ into two regions. </p> <p>First region is $u=0$ to $a$, $v=1$ to $u+\sqrt{u} +1$.</p> <p>Second region is $u = a$ to $1$, $v=u+\sqrt{u}$ to $2$.</p> <p>Everything works swimmingly until the end when there are two integrals:</p> <p>$$\int_0^a \frac{1}{2} \cos(1-\sqrt{u}-u) \; du$$</p> <p>and</p> <p>$$\int_a^1 \frac{1}{2} \cos(2-\sqrt{u}-u) \; du.$$</p> <p>I was hoping for a clever substitution that would change one of them into the other and then they'd cancel out. I've had no luck with that so far.</p> Answer 10: <p>I have added an answer to an other question which solves this question as well or shows where the problem is in general for this integral to be evaluated: <a href="https://math.stackexchange.com/a/2876108/541516">https://math.stackexchange.com/a/2876108/541516</a></p> <p>An other interesting variable change which can take us much further and show that this integral cannot be evaluated to closed form solution without series expansion, would be to remove away $x$ in the integral. See that we have $x^2$ inside $\sin(\cdot)$ which can make this happen.</p> <p>Let, $k = x^2$ and keeping $y$ as it is, which gives a jacobian of $1/2x$ and cancels with the $x$. Now, although we have $\sin(y-k)$ kind of structure which we can expand by trigonometric identity of $\sin(A-B)$ into two parts when function of $k$ and $y$ are independent, it is better to do an other variable change which makes integral limits associated with $y$ to fixed limits. 
</p> <p>See that now the limit curves are of the form $y-k = c$ and $\sqrt k + y = c$.</p> <p>So, we can take new variable $t = y-k$ which actually does this, making the limits for $t$ to be 0 to 1. Limits for $k$ are taken to be dependent on t and of the form $\sqrt k + k + t = c$. Now the inner function is just $sin(t)$ which can be pulled out of the integral with $k$. Now to evaluate the integral with $k$ we just have to evaluate the limits i.e. $\sqrt k + k + t = 1$ and $\sqrt k + k + t = 2$. If we solve the quadratic equations in $\sqrt k$ and then take square the positive of the quadratic solution which is ($-1 + \sqrt{1 + 4(c-t)}$)/2, we get, constants, $t$ terms and $\sqrt{c-t}$ kind of terms. And remember a $sin(t)$ is waiting for us and limits are 0 to 1 in variable t! </p> <p>Now, $\sin(t)$ can be evaluated, $t \sin(t)$ is well known by integration by parts. The problematic part is the terms with the structure $\sqrt{c-t} \sin(t)$. To get rigid of c (which is 1 and 2 here), we can do variable transformation and $p = c-t$ which brings us a structure $\sqrt p \sin(c - p)$ which similar to the general structure, $\sqrt p \sin(p)$ through expansion of $\sin(c - p)$. </p> <p>So, now to this $\sqrt{p} \sin(p)$ when we do integration by parts, this ends up with having an integral of $cos(p)/\sqrt{p}$ to be evaluated, which on variable transformation $p = u^2$ gives integral of $\cos(u^2)$ which further more cannot be simplified by hand and is usually termed Fresnel integral which has series expansion ( see <a href="https://en.wikipedia.org/wiki/Fresnel_integral" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Fresnel_integral</a>).</p> <p>PS: I have focused only on the hard terms and ignored signs, constants and other easy terms coming from integration by parts just for transparent idea of what creates problem, hope it is understandable, let me know if some part is unclear.</p> Answer 11: <p>Here's half an answer. 
Maybe you went down this path already:</p> <p>If you let $u=x^2, v=x+y$ some things are less horrid. First, the Jacobian is $\frac{1}{2x}$ which cancels the first $x$ in the integrand. The pre-image $D^*$ of the domain is bounded by $v=1, v=2, v=u+\sqrt{u},$ and $v=u+\sqrt{u}+1.$ </p> <p>At first glance $D^*$ tempts you to integrate in the order $du \; dv.$ But you run into the same horridness solving those equations for $u$. But happily, the 4 points of intersections are not that bad and (here's the good part) the middle two have the same $u$ coordinate $a=(3-\sqrt{5})/2$. So you can break $D^*$ into two regions. </p> <p>First region is $u=0$ to $a$, $v=1$ to $u+\sqrt{u} +1$.</p> <p>Second region is $u = a$ to $1$, $v=u+\sqrt{u}$ to $2$.</p> <p>Everything works swimmingly until the end when there are two integrals:</p> <p>$$\int_0^a \frac{1}{2} \cos(1-\sqrt{u}-u) \; du$$</p> <p>and</p> <p>$$\int_a^1 \frac{1}{2} \cos(2-\sqrt{u}-u) \; du.$$</p> <p>I was hoping for a clever substitution that would change one of them into the other and then they'd cancel out. I've had no luck with that so far.</p> Answer 12: <p>I have added an answer to an other question which solves this question as well or shows where the problem is in general for this integral to be evaluated: <a href="https://math.stackexchange.com/a/2876108/541516">https://math.stackexchange.com/a/2876108/541516</a></p> <p>An other interesting variable change which can take us much further and show that this integral cannot be evaluated to closed form solution without series expansion, would be to remove away $x$ in the integral. See that we have $x^2$ inside $\sin(\cdot)$ which can make this happen.</p> <p>Let, $k = x^2$ and keeping $y$ as it is, which gives a jacobian of $1/2x$ and cancels with the $x$. 
Now, although we have $\sin(y-k)$ kind of structure which we can expand by trigonometric identity of $\sin(A-B)$ into two parts when function of $k$ and $y$ are independent, it is better to do an other variable change which makes integral limits associated with $y$ to fixed limits. </p> <p>See that now the limit curves are of the form $y-k = c$ and $\sqrt k + y = c$.</p> <p>So, we can take new variable $t = y-k$ which actually does this, making the limits for $t$ to be 0 to 1. Limits for $k$ are taken to be dependent on t and of the form $\sqrt k + k + t = c$. Now the inner function is just $sin(t)$ which can be pulled out of the integral with $k$. Now to evaluate the integral with $k$ we just have to evaluate the limits i.e. $\sqrt k + k + t = 1$ and $\sqrt k + k + t = 2$. If we solve the quadratic equations in $\sqrt k$ and then take square the positive of the quadratic solution which is ($-1 + \sqrt{1 + 4(c-t)}$)/2, we get, constants, $t$ terms and $\sqrt{c-t}$ kind of terms. And remember a $sin(t)$ is waiting for us and limits are 0 to 1 in variable t! </p> <p>Now, $\sin(t)$ can be evaluated, $t \sin(t)$ is well known by integration by parts. The problematic part is the terms with the structure $\sqrt{c-t} \sin(t)$. To get rigid of c (which is 1 and 2 here), we can do variable transformation and $p = c-t$ which brings us a structure $\sqrt p \sin(c - p)$ which similar to the general structure, $\sqrt p \sin(p)$ through expansion of $\sin(c - p)$. 
</p> <p>So, now to this $\sqrt{p} \sin(p)$ when we do integration by parts, this ends up with having an integral of $cos(p)/\sqrt{p}$ to be evaluated, which on variable transformation $p = u^2$ gives integral of $\cos(u^2)$ which further more cannot be simplified by hand and is usually termed Fresnel integral which has series expansion ( see <a href="https://en.wikipedia.org/wiki/Fresnel_integral" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Fresnel_integral</a>).</p> <p>PS: I have focused only on the hard terms and ignored signs, constants and other easy terms coming from integration by parts just for transparent idea of what creates problem, hope it is understandable, let me know if some part is unclear.</p> Answer 13: <p>Here's half an answer. Maybe you went down this path already:</p> <p>If you let $u=x^2, v=x+y$ some things are less horrid. First, the Jacobian is $\frac{1}{2x}$ which cancels the first $x$ in the integrand. The pre-image $D^*$ of the domain is bounded by $v=1, v=2, v=u+\sqrt{u},$ and $v=u+\sqrt{u}+1.$ </p> <p>At first glance $D^*$ tempts you to integrate in the order $du \; dv.$ But you run into the same horridness solving those equations for $u$. But happily, the 4 points of intersections are not that bad and (here's the good part) the middle two have the same $u$ coordinate $a=(3-\sqrt{5})/2$. So you can break $D^*$ into two regions. </p> <p>First region is $u=0$ to $a$, $v=1$ to $u+\sqrt{u} +1$.</p> <p>Second region is $u = a$ to $1$, $v=u+\sqrt{u}$ to $2$.</p> <p>Everything works swimmingly until the end when there are two integrals:</p> <p>$$\int_0^a \frac{1}{2} \cos(1-\sqrt{u}-u) \; du$$</p> <p>and</p> <p>$$\int_a^1 \frac{1}{2} \cos(2-\sqrt{u}-u) \; du.$$</p> <p>I was hoping for a clever substitution that would change one of them into the other and then they'd cancel out. 
I've had no luck with that so far.</p> Answer 14: <p>I have added an answer to an other question which solves this question as well or shows where the problem is in general for this integral to be evaluated: <a href="https://math.stackexchange.com/a/2876108/541516">https://math.stackexchange.com/a/2876108/541516</a></p> <p>An other interesting variable change which can take us much further and show that this integral cannot be evaluated to closed form solution without series expansion, would be to remove away $x$ in the integral. See that we have $x^2$ inside $\sin(\cdot)$ which can make this happen.</p> <p>Let, $k = x^2$ and keeping $y$ as it is, which gives a jacobian of $1/2x$ and cancels with the $x$. Now, although we have $\sin(y-k)$ kind of structure which we can expand by trigonometric identity of $\sin(A-B)$ into two parts when function of $k$ and $y$ are independent, it is better to do an other variable change which makes integral limits associated with $y$ to fixed limits. </p> <p>See that now the limit curves are of the form $y-k = c$ and $\sqrt k + y = c$.</p> <p>So, we can take new variable $t = y-k$ which actually does this, making the limits for $t$ to be 0 to 1. Limits for $k$ are taken to be dependent on t and of the form $\sqrt k + k + t = c$. Now the inner function is just $sin(t)$ which can be pulled out of the integral with $k$. Now to evaluate the integral with $k$ we just have to evaluate the limits i.e. $\sqrt k + k + t = 1$ and $\sqrt k + k + t = 2$. If we solve the quadratic equations in $\sqrt k$ and then take square the positive of the quadratic solution which is ($-1 + \sqrt{1 + 4(c-t)}$)/2, we get, constants, $t$ terms and $\sqrt{c-t}$ kind of terms. And remember a $sin(t)$ is waiting for us and limits are 0 to 1 in variable t! </p> <p>Now, $\sin(t)$ can be evaluated, $t \sin(t)$ is well known by integration by parts. The problematic part is the terms with the structure $\sqrt{c-t} \sin(t)$. 
To get rigid of c (which is 1 and 2 here), we can do variable transformation and $p = c-t$ which brings us a structure $\sqrt p \sin(c - p)$ which similar to the general structure, $\sqrt p \sin(p)$ through expansion of $\sin(c - p)$. </p> <p>So, now to this $\sqrt{p} \sin(p)$ when we do integration by parts, this ends up with having an integral of $cos(p)/\sqrt{p}$ to be evaluated, which on variable transformation $p = u^2$ gives integral of $\cos(u^2)$ which further more cannot be simplified by hand and is usually termed Fresnel integral which has series expansion ( see <a href="https://en.wikipedia.org/wiki/Fresnel_integral" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Fresnel_integral</a>).</p> <p>PS: I have focused only on the hard terms and ignored signs, constants and other easy terms coming from integration by parts just for transparent idea of what creates problem, hope it is understandable, let me know if some part is unclear.</p> Answer 15: <p>Here's half an answer. Maybe you went down this path already:</p> <p>If you let $u=x^2, v=x+y$ some things are less horrid. First, the Jacobian is $\frac{1}{2x}$ which cancels the first $x$ in the integrand. The pre-image $D^*$ of the domain is bounded by $v=1, v=2, v=u+\sqrt{u},$ and $v=u+\sqrt{u}+1.$ </p> <p>At first glance $D^*$ tempts you to integrate in the order $du \; dv.$ But you run into the same horridness solving those equations for $u$. But happily, the 4 points of intersections are not that bad and (here's the good part) the middle two have the same $u$ coordinate $a=(3-\sqrt{5})/2$. So you can break $D^*$ into two regions. 
</p> <p>First region is $u=0$ to $a$, $v=1$ to $u+\sqrt{u} +1$.</p> <p>Second region is $u = a$ to $1$, $v=u+\sqrt{u}$ to $2$.</p> <p>Everything works swimmingly until the end when there are two integrals:</p> <p>$$\int_0^a \frac{1}{2} \cos(1-\sqrt{u}-u) \; du$$</p> <p>and</p> <p>$$\int_a^1 \frac{1}{2} \cos(2-\sqrt{u}-u) \; du.$$</p> <p>I was hoping for a clever substitution that would change one of them into the other and then they'd cancel out. I've had no luck with that so far.</p> Answer 16: <p>I have added an answer to an other question which solves this question as well or shows where the problem is in general for this integral to be evaluated: <a href="https://math.stackexchange.com/a/2876108/541516">https://math.stackexchange.com/a/2876108/541516</a></p> <p>An other interesting variable change which can take us much further and show that this integral cannot be evaluated to closed form solution without series expansion, would be to remove away $x$ in the integral. See that we have $x^2$ inside $\sin(\cdot)$ which can make this happen.</p> <p>Let, $k = x^2$ and keeping $y$ as it is, which gives a jacobian of $1/2x$ and cancels with the $x$. Now, although we have $\sin(y-k)$ kind of structure which we can expand by trigonometric identity of $\sin(A-B)$ into two parts when function of $k$ and $y$ are independent, it is better to do an other variable change which makes integral limits associated with $y$ to fixed limits. </p> <p>See that now the limit curves are of the form $y-k = c$ and $\sqrt k + y = c$.</p> <p>So, we can take new variable $t = y-k$ which actually does this, making the limits for $t$ to be 0 to 1. Limits for $k$ are taken to be dependent on t and of the form $\sqrt k + k + t = c$. Now the inner function is just $sin(t)$ which can be pulled out of the integral with $k$. Now to evaluate the integral with $k$ we just have to evaluate the limits i.e. $\sqrt k + k + t = 1$ and $\sqrt k + k + t = 2$. 
If we solve the quadratic equations in $\sqrt k$ and then take square the positive of the quadratic solution which is ($-1 + \sqrt{1 + 4(c-t)}$)/2, we get, constants, $t$ terms and $\sqrt{c-t}$ kind of terms. And remember a $sin(t)$ is waiting for us and limits are 0 to 1 in variable t! </p> <p>Now, $\sin(t)$ can be evaluated, $t \sin(t)$ is well known by integration by parts. The problematic part is the terms with the structure $\sqrt{c-t} \sin(t)$. To get rigid of c (which is 1 and 2 here), we can do variable transformation and $p = c-t$ which brings us a structure $\sqrt p \sin(c - p)$ which similar to the general structure, $\sqrt p \sin(p)$ through expansion of $\sin(c - p)$. </p> <p>So, now to this $\sqrt{p} \sin(p)$ when we do integration by parts, this ends up with having an integral of $cos(p)/\sqrt{p}$ to be evaluated, which on variable transformation $p = u^2$ gives integral of $\cos(u^2)$ which further more cannot be simplified by hand and is usually termed Fresnel integral which has series expansion ( see <a href="https://en.wikipedia.org/wiki/Fresnel_integral" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Fresnel_integral</a>).</p> <p>PS: I have focused only on the hard terms and ignored signs, constants and other easy terms coming from integration by parts just for transparent idea of what creates problem, hope it is understandable, let me know if some part is unclear.</p> Answer 17: <p>Here's half an answer. Maybe you went down this path already:</p> <p>If you let $u=x^2, v=x+y$ some things are less horrid. First, the Jacobian is $\frac{1}{2x}$ which cancels the first $x$ in the integrand. The pre-image $D^*$ of the domain is bounded by $v=1, v=2, v=u+\sqrt{u},$ and $v=u+\sqrt{u}+1.$ </p> <p>At first glance $D^*$ tempts you to integrate in the order $du \; dv.$ But you run into the same horridness solving those equations for $u$. 
But happily, the 4 points of intersections are not that bad and (here's the good part) the middle two have the same $u$ coordinate $a=(3-\sqrt{5})/2$. So you can break $D^*$ into two regions. </p> <p>First region is $u=0$ to $a$, $v=1$ to $u+\sqrt{u} +1$.</p> <p>Second region is $u = a$ to $1$, $v=u+\sqrt{u}$ to $2$.</p> <p>Everything works swimmingly until the end when there are two integrals:</p> <p>$$\int_0^a \frac{1}{2} \cos(1-\sqrt{u}-u) \; du$$</p> <p>and</p> <p>$$\int_a^1 \frac{1}{2} \cos(2-\sqrt{u}-u) \; du.$$</p> <p>I was hoping for a clever substitution that would change one of them into the other and then they'd cancel out. I've had no luck with that so far.</p> Answer 18: <p>I have added an answer to an other question which solves this question as well or shows where the problem is in general for this integral to be evaluated: <a href="https://math.stackexchange.com/a/2876108/541516">https://math.stackexchange.com/a/2876108/541516</a></p> <p>An other interesting variable change which can take us much further and show that this integral cannot be evaluated to closed form solution without series expansion, would be to remove away $x$ in the integral. See that we have $x^2$ inside $\sin(\cdot)$ which can make this happen.</p> <p>Let, $k = x^2$ and keeping $y$ as it is, which gives a jacobian of $1/2x$ and cancels with the $x$. Now, although we have $\sin(y-k)$ kind of structure which we can expand by trigonometric identity of $\sin(A-B)$ into two parts when function of $k$ and $y$ are independent, it is better to do an other variable change which makes integral limits associated with $y$ to fixed limits. </p> <p>See that now the limit curves are of the form $y-k = c$ and $\sqrt k + y = c$.</p> <p>So, we can take new variable $t = y-k$ which actually does this, making the limits for $t$ to be 0 to 1. Limits for $k$ are taken to be dependent on t and of the form $\sqrt k + k + t = c$. 
Now the inner function is just $sin(t)$ which can be pulled out of the integral with $k$. Now to evaluate the integral with $k$ we just have to evaluate the limits i.e. $\sqrt k + k + t = 1$ and $\sqrt k + k + t = 2$. If we solve the quadratic equations in $\sqrt k$ and then take square the positive of the quadratic solution which is ($-1 + \sqrt{1 + 4(c-t)}$)/2, we get, constants, $t$ terms and $\sqrt{c-t}$ kind of terms. And remember a $sin(t)$ is waiting for us and limits are 0 to 1 in variable t! </p> <p>Now, $\sin(t)$ can be evaluated, $t \sin(t)$ is well known by integration by parts. The problematic part is the terms with the structure $\sqrt{c-t} \sin(t)$. To get rigid of c (which is 1 and 2 here), we can do variable transformation and $p = c-t$ which brings us a structure $\sqrt p \sin(c - p)$ which similar to the general structure, $\sqrt p \sin(p)$ through expansion of $\sin(c - p)$. </p> <p>So, now to this $\sqrt{p} \sin(p)$ when we do integration by parts, this ends up with having an integral of $cos(p)/\sqrt{p}$ to be evaluated, which on variable transformation $p = u^2$ gives integral of $\cos(u^2)$ which further more cannot be simplified by hand and is usually termed Fresnel integral which has series expansion ( see <a href="https://en.wikipedia.org/wiki/Fresnel_integral" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Fresnel_integral</a>).</p> <p>PS: I have focused only on the hard terms and ignored signs, constants and other easy terms coming from integration by parts just for transparent idea of what creates problem, hope it is understandable, let me know if some part is unclear.</p>
Linear map $f(x^n)=nf(x)$.
2,078,982
5
https://math.stackexchange.com/questions/2078982/linear-map-fxn-nfx
<p>I want to find a linear map $f$ with the property $f(x^n)=nf(x)$.</p> <p>Suppose we have the recurrence $x_n+\frac{1}{x_n}=x_{n+1}$ with $x_1=a$. If such a map exists then $0=f(x_{n+1})$. So $f$ attains roots at $x_2,x_3,....$. </p> <p>Does $\{x_2,x_3,....\}$ account for all real numbers if we are allowed to vary $a \in \mathbb{R}$? </p>
[ { "answer_id": 2078989, "score": 2, "text": "<p>Let $x_0\\neq 0$ be a number so that $f(x_0)=0$. Then for any $c\\in \\Bbb R$, we have $f(cx_0)=cf(x_0)=0$.</p>\n" }, { "answer_id": 2078984, "score": 1, "text": "<p>we have $f(1^n)=nf(1)$ for all positive $n$, so $f(1)=0$. We also have $0=f((-1)^2)=2f(-1)$ and so $f(-1)=0$.</p>\n\n<p>Since the function is linear and has two roots we have $f(x)=0$ for all $x$.</p>\n" } ]
Question: Linear map $f(x^n)=nf(x)$. Answer 1: <p>Let $x_0\neq 0$ be a number so that $f(x_0)=0$. Then for any $c\in \Bbb R$, we have $f(cx_0)=cf(x_0)=0$.</p> Answer 2: <p>we have $f(1^n)=nf(1)$ for all positive $n$, so $f(1)=0$. We also have $0=f((-1)^2)=2f(-1)$ and so $f(-1)=0$.</p> <p>Since the function is linear and has two roots we have $f(x)=0$ for all $x$.</p>
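As a quick sanity check on the answers above (my own sketch, not part of the original thread): an $\mathbb{R}$-linear map $f:\mathbb{R}\to\mathbb{R}$ has the form $f(x)=cx$, and imposing $f(x^n)=nf(x)$ at $x=1$ already forces $c=0$, so only the zero map survives.

```python
def satisfies_property(c, samples, max_n=5):
    """Does the linear map f(x) = c*x satisfy f(x**n) == n*f(x)
    on the given sample points, for n = 1..max_n?"""
    f = lambda x: c * x
    return all(
        abs(f(x ** n) - n * f(x)) < 1e-9
        for x in samples
        for n in range(1, max_n + 1)
    )

samples = [1.0, 2.0, 3.5, -1.0]

zero_ok = satisfies_property(0.0, samples)           # the zero map works
nonzero_fail = not satisfies_property(1.0, samples)  # c != 0 fails already at x = 1, n = 2
print(zero_ok, nonzero_fail)
```

The failure point matches the argument in Answer 2: with $c\neq 0$, $f(1^2)=c$ while $2f(1)=2c$, so $f(1)$ must be $0$, and a linear map vanishing at $1$ vanishes everywhere.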
Why Is Inverse of a Polynomial Still a Polynomial?
2,079,202
5
https://math.stackexchange.com/questions/2079202/why-is-inverse-of-a-polynomial-still-a-polynomial
<p>The fourth paragraph of the wikipedia article <a href="https://en.wikipedia.org/wiki/Algebraic_extension#cite_note-1" rel="noreferrer">algebraic extension</a> states that $K[a]$ is a field, which means every polynomial has an inverse. The inverse has to be a polynomial over $K$ as well. It seems it requires $K$ to be a non-integral domain. How do we resolve the confusion? How do we prove this paragraph? </p>
[ { "answer_id": 2079207, "score": 8, "text": "<p>Polynomials over a field $K$, that is, elements of $K[x]$, don't generally (unless they have degree zero) have an inverse.</p>\n\n<p>$K[a]$ is a different object, it is isomorphic to the quotient ring $K[x]/(m)$ where $m$ is the minimal polynomial of $a$ (you can see this via the first isomorphism theorem using the morphism $\\varphi:K[x]\\to K[a]$ which sends $\\sum k_i x^i\\mapsto\\sum k_i a^i$), assuming $a$ lives inside a field extension of $K$ its minimal polynomial will be irreducible, so the ideal $(m)$ it generates will be maximal and the quotient $K[x]/(m)$ will be a field.</p>\n\n<p>Inverses in this field can be computed via the usual euclidean algorithm, take a polynomial $f\\in K[x]$ with $\\gcd(f,m)=1$, then by Bezout's theorem there are $h$ and $g$ such that $fh+gm=1$, but in the quotient ring $gm=0$ so $h$ is the inverse of $f$</p>\n" }, { "answer_id": 2079214, "score": 2, "text": "<p>First, $\\;f(a)\\;$ is <strong>not</strong> a polynomial but the evaluation of a polynomial at $\\;a\\;$ , meaning: a number. </p>\n\n<p>The complete claim is: If $\\;K\\;$ is a field and $\\;F\\;$ is some extension of $\\;K\\;$ , and $\\;a\\in F\\;$ <em>is algebraic</em> over $\\;K\\;$ , then</p>\n\n<p>$$K[a]:=\\left\\{\\,f(a)\\;/\\;f(x)\\in K[x]\\,\\right\\}\\;\\;\\text{is a field, which means}\\;\\;K[a]=K(a)$$</p>\n\n<p>The proof: since $\\;a\\;$ is algebraic over $\\;K\\;$ there exists $\\;0\\not\\equiv m_a(x)\\in K[x]\\;$ of minimal degree s.t. $\\;m_a(a)=0\\;$ , but then:</p>\n\n<p>$$m_a(x)=\\sum_{k=0}^{n-1}b_kx^k+x^n\\implies b_0=-a^n-\\sum_{k=1}^{n-1}b_ka^k\\;\\;\\color{red}{(*)}$$</p>\n\n<p>(...Observe the claim is trivial if $\\;a\\in K\\;$ , and we can assume $\\;b_0\\neq0\\;$ (why?) ...)</p>\n\n<p>Yet $\\;0\\neq a\\in F\\;$ and $\\;F\\;$ is a field, so $\\;a^{-1}\\in F\\;$ , and then multiplying $\\;\\color{red}{(*)}\\;$ by $\\;a^{-1}\\;$:</p>\n\n<p>$$a^{-1}=-\\frac1{b_0}\\left(a^{n-1}+\\sum_{k=1}^{n-1}b_ka^{k-1}\\right)\\in K[a]$$</p>\n\n<p>Deduce now that $\\;K(a)\\subset K[a]\\;$ , and since the other contention is trivial if we remember $\\;K(a)\\;$ is the fraction field of the integral domain $\\;K[a]\\;$ , we're done.</p>\n" } ]
Question: Why Is Inverse of a Polynomial Still a Polynomial? Answer 1: <p>Polynomials over a field $K$, that is, elements of $K[x]$, don't generally (unless they have degree zero) have an inverse.</p> <p>$K[a]$ is a different object, it is isomorphic to the quotient ring $K[x]/(m)$ where $m$ is the minimal polynomial of $a$ (you can see this via the first isomorphism theorem using the morphism $\varphi:K[x]\to K[a]$ which sends $\sum k_i x^i\mapsto\sum k_i a^i$), assuming $a$ lives inside a field extension of $K$ its minimal polynomial will be irreducible, so the ideal $(m)$ it generates will be maximal and the quotient $K[x]/(m)$ will be a field.</p> <p>Inverses in this field can be computed via the usual euclidean algorithm, take a polynomial $f\in K[x]$ with $\gcd(f,m)=1$, then by Bezout's theorem there are $h$ and $g$ such that $fh+gm=1$, but in the quotient ring $gm=0$ so $h$ is the inverse of $f$</p> Answer 2: <p>First, $\;f(a)\;$ is <strong>not</strong> a polynomial but the evaluation of a polynomial at $\;a\;$ , meaning: a number. </p> <p>The complete claim is: If $\;K\;$ is a field and $\;F\;$ is some extension of $\;K\;$ , and $\;a\in F\;$ <em>is algebraic</em> over $\;K\;$ , then</p> <p>$$K[a]:=\left\{\,f(a)\;/\;f(x)\in K[x]\,\right\}\;\;\text{is a field, which means}\;\;K[a]=K(a)$$</p> <p>The proof: since $\;a\;$ is algebraic over $\;K\;$ there exists $\;0\not\equiv m_a(x)\in K[x]\;$ of minimal degree s.t. $\;m_a(a)=0\;$ , but then:</p> <p>$$m_a(x)=\sum_{k=0}^{n-1}b_kx^k+x^n\implies b_0=-a^n-\sum_{k=1}^{n-1}b_ka^k\;\;\color{red}{(*)}$$</p> <p>(...Observe the claim is trivial if $\;a\in K\;$ , and we can assume $\;b_0\neq0\;$ (why?) ...)</p> <p>Yet $\;0\neq a\in F\;$ and $\;F\;$ is a field, so $\;a^{-1}\in F\;$ , and then multiplying $\;\color{red}{(*)}\;$ by $\;a^{-1}\;$:</p> <p>$$a^{-1}=-\frac1{b_0}\left(a^{n-1}+\sum_{k=1}^{n-1}b_ka^{k-1}\right)\in K[a]$$</p> <p>Deduce now that $\;K(a)\subset K[a]\;$ , and since the other contention is trivial if we remember $\;K(a)\;$ is the fraction field of the integral domain $\;K[a]\;$ , we're done.</p>
but in the quotient ring $gm=0$ so $h$ is the inverse of $f$</p> Answer 18: <p>First, $\;f(a)\;$ is <strong>not</strong> a polynomial but the evaluation of a polynomial at $\;a\;$ , meaning: a number. </p> <p>The complete claim is: If $\;K\;$ is a field and $\;F\;$ is some extension of $\;K\;$ , and $\;a\in F\;$ <em>is algebraic</em> over $\;K\;$ , then</p> <p>$$K[a]:=\left\{\,f(a)\;/\;f(x)\in K[x]\,\right\}\;\;\text{is a field, which means}\;\;K[a]=K(a)$$</p> <p>The proof: since $\;a\;$ is algebraic over $\;K\;$ there exists $\;0\not\equiv m_a(x)\in K[x]\;$ of minimal degree s.t. $\;m_a(a)=0\;$ , but then:</p> <p>$$m_a(x)=\sum_{k=0}^{n-1}b_kx^k+x^n\implies b_0=-a^n-\sum_{k=1}^{n-1}b_ka^k\;\;\color{red}{(*)}$$</p> <p>(...Observe the claim is trivial if $\;a\in K\;$ , and we can assume $\;b_0\neq0\;$ (why?) ...)</p> <p>Yet $\;0\neq a\in F\;$ and $\;F\;$ is a field, so $\;a^{-1}\in F\;$ , and then multiplying $\;\color{red}{(*)}\;$ by $\;a^{-1}\;$:</p> <p>$$a^{-1}=-\frac1{b_0}\left(a^{n-1}+\sum_{k=1}^{n-1}b_ka^{k-1}\right)\in K[a]$$</p> <p>Deduce now that $\;K(a)\subset K[a]\;$ , and since the other contention is trivial if we remember $\;K(a)\;$ is the fraction field of the integral domain $\;K[a]\;$ , we're done.</p>
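The Euclidean-algorithm route to inverses in $K[x]/(m)$ described in this thread can be made concrete with a short sketch. This is an illustration I am adding, not part of the original thread; the coefficient-list representation and the helper names (`divmod_poly`, `inverse_mod`) are my own. It computes $h$ with $fh \equiv 1 \pmod m$ over $\mathbb{Q}$, e.g. the inverse of $a=\sqrt2$ (that is, $f=x$ modulo $m=x^2-2$), which comes out as $x/2$, matching $1/\sqrt2 = \sqrt2/2$.

```python
from fractions import Fraction

def degree(p):
    """Degree of a low-to-high coefficient list; -1 for the zero polynomial."""
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def trim(p):
    return p[:degree(p) + 1]

def add(p, q):
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
                 for i in range(n)])

def scale(p, c, shift=0):
    """c * x^shift * p."""
    return [Fraction(0)] * shift + [c * a for a in p]

def mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return trim(out)

def divmod_poly(p, q):
    """Quotient and remainder of p divided by q over Q (q nonzero)."""
    r = trim(p)
    quo = [Fraction(0)] * (len(r) + 1)
    while degree(r) >= degree(q) >= 0:
        d = degree(r) - degree(q)
        c = r[degree(r)] / q[degree(q)]
        quo[d] += c
        r = add(r, scale(q, -c, shift=d))   # cancel the leading term
    return trim(quo), r

def inverse_mod(f, m):
    """h with f*h = 1 (mod m), via the extended Euclidean algorithm."""
    # invariant: r0 = s0*f + t0*m and r1 = s1*f + t1*m (the t's are not tracked)
    r0, r1 = trim(m), divmod_poly(f, m)[1]
    s0, s1 = [Fraction(0)], [Fraction(1)]
    while degree(r1) > 0:
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, add(s0, scale(mul(q, s1), Fraction(-1)))
    if degree(r1) != 0:
        raise ValueError("f is not invertible modulo m (gcd is not constant)")
    c = r1[0]
    return [a / c for a in s1]   # normalize so that f*h = 1, not f*h = c

# a = sqrt(2): f = x, m = x^2 - 2; the inverse is x/2
f = [Fraction(0), Fraction(1)]
m = [Fraction(-2), Fraction(0), Fraction(1)]
h = inverse_mod(f, m)
```

This is the same Bezout identity $fh + gm = 1$ from the answer, with $g$ simply not tracked; any computer-algebra system exposes the same operation directly (e.g. an extended-gcd routine for polynomials).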
Distribution of zeroes of a continuous function
2,079,449
5
https://math.stackexchange.com/questions/2079449/distribution-of-zeroes-of-a-continuous-function
<p>I'm interested in the possible distribution of zeroes in a closed interval $[a,b] \subset \mathbb{R}$ of a continuous function $f:\mathbb{R} \rightarrow \mathbb{R}$ that is non-constant in any open interval. </p> <ol> <li>I managed to show that if the zeroes are uniformly distributed, then the set of zeroes in $[a,b]$ can be at most finite; otherwise they would form a dense set, which would imply that the function is constant. </li> <li>I'm curious about the possible number of accumulation points in $[a,b]$. I conjecture that there can be countably many, but I haven't managed to construct an explicit example of a function satisfying this requirement. </li> <li>I also believe, but I haven't managed to show, that there can't be uncountably many accumulation points in $[a,b]$.</li> </ol> <p>Is there a general theorem in analysis or general topology which addresses these particular questions?</p>
[ { "answer_id": 2079476, "score": 8, "text": "<p>Let $C \\subset [0,1]$ be a Cantor set (could be the standard middle-thirds one, could be a fat Cantor set), and</p>\n\n<p>$$f(t) = \\operatorname{dist}(t,C) = \\inf \\{ \\lvert t-c\\rvert : c \\in C\\}.$$</p>\n\n<p>Then $f\\colon [0,1] \\to [0,1]$ is a continuous function with $C = f^{-1}(0)$, and $f$ is not constant on any nondegenerate interval. Since Cantor sets are perfect, $f^{-1}(0)$ has uncountably many ($2^{\\aleph_0}$) accumulation points.</p>\n\n<p>Every closed subset of a metric space is the zero set of a continuous function, and if the domain is an interval, one can always arrange it so that the function is not constant on any interval not intersecting the zero set. Thus for an interval, every closed subset with empty interior is the zero set of a continuous function that is not constant on any interval.</p>\n" }, { "answer_id": 2079460, "score": 1, "text": "<p>Part $2$ is solved fast by the function:</p>\n\n<p>$f(x)=\\cases{x\\sin(1/x)) \\text{ if } x\\neq 0\\\\ 0 \\text{ if } x=0}$</p>\n\n<p>It has a countable number of zeroes in the interval $[0,1]$.</p>\n\n<hr>\n\n<p>Sorry, the previous set of discontinuities was countable but had only one accumulation point, the following does the trick:</p>\n\n<p>Take the set $A=\\frac{1}{2^n}+\\frac{1}{2^m}$ with $n$ and $m$ positive naturals. As in Daniel's answer we can take the function $f(x)=d(x,A)$. The accumulation points are precisely $0$ and $\\frac{1}{2^n}$ for positive $n$.</p>\n" } ]
Question: Distribution of zeroes of a continuous function Answer 1: <p>Let $C \subset [0,1]$ be a Cantor set (could be the standard middle-thirds one, could be a fat Cantor set), and</p> <p>$$f(t) = \operatorname{dist}(t,C) = \inf \{ \lvert t-c\rvert : c \in C\}.$$</p> <p>Then $f\colon [0,1] \to [0,1]$ is a continuous function with $C = f^{-1}(0)$, and $f$ is not constant on any nondegenerate interval. Since Cantor sets are perfect, $f^{-1}(0)$ has uncountably many ($2^{\aleph_0}$) accumulation points.</p> <p>Every closed subset of a metric space is the zero set of a continuous function, and if the domain is an interval, one can always arrange it so that the function is not constant on any interval not intersecting the zero set. Thus for an interval, every closed subset with empty interior is the zero set of a continuous function that is not constant on any interval.</p> Answer 2: <p>Part $2$ is solved fast by the function:</p> <p>$f(x)=\cases{x\sin(1/x)) \text{ if } x\neq 0\\ 0 \text{ if } x=0}$</p> <p>It has a countable number of zeroes in the interval $[0,1]$.</p> <hr> <p>Sorry, the previous set of discontinuities was countable but had only one accumulation point, the following does the trick:</p> <p>Take the set $A=\frac{1}{2^n}+\frac{1}{2^m}$ with $n$ and $m$ positive naturals. As in Daniel's answer we can take the function $f(x)=d(x,A)$. The accumulation points are precisely $0$ and $\frac{1}{2^n}$ for positive $n$.</p>
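The distance-to-a-Cantor-set construction in the first answer is easy to probe numerically. The sketch below is my own addition, not part of the thread: it replaces $C$ by the $n$-th middle-thirds stage $C_n \supset C$ (a finite union of closed intervals) and measures the distance to that; the helper names are assumptions of mine.

```python
def cantor_intervals(n):
    """Intervals making up the n-th middle-thirds approximation C_n of the Cantor set."""
    ivs = [(0.0, 1.0)]
    for _ in range(n):
        # keep the left and right thirds of every interval
        ivs = [third for (a, b) in ivs
               for third in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return ivs

def dist_to_cantor(t, n=12):
    """f(t) = dist(t, C), approximated by dist(t, C_n).

    Exact whenever the nearest point of C is an interval endpoint that
    survives to stage n (e.g. 1/3, 2/3, and the midpoints between them).
    """
    # distance from t to [a, b] is max(a - t, 0, t - b)
    return min(max(a - t, 0.0, t - b) for (a, b) in cantor_intervals(n))
```

Endpoints such as $1/3$ lie in $C$, so $f$ vanishes there, while the midpoint $1/2$ of the first removed interval is at distance exactly $1/6$. For a fat Cantor set one would only change the fraction removed at each stage; the zero set $f^{-1}(0)$ stays perfect either way.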
Packing two different circles in a rectangle
2,079,582
5
https://math.stackexchange.com/questions/2079582/packing-two-different-circles-in-a-rectangle
<p>Given the dimension of a rectangle and radii of two circles, how can I decide if these two circles can fit in the rectangle?</p> <p>I don't know if there is a formula to compute such a thing!</p> <p>thank you :)</p>
[ { "answer_id": 2079601, "score": 5, "text": "<p>In the figure below, let circle $E$ have radius $r_1$, circle $G$ have radius $r_2$ and the rectangle be $H$ high and $W$ wide. You get the best fit by putting the circles in opposite corners. First you need $H,W \\gt 2r_1, 2r_2$ or one of the circles won't fit all by itself. Then the coordinates of $E$ are $(r_1,H-r_1)$ and the coordinates of $G$ are $(W-r_2,r_2)$. You need them to be at least $r_1+r_2$ apart, so the requirement is $$\\sqrt{(W-r_1-r_2)^2+(H-r_1-r_2)^2} \\gt r_1+r_2$$</p>\n\n<p><a href=\"https://i.stack.imgur.com/2tPDu.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2tPDu.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2079607, "score": -1, "text": "<p>To pack two circles tightly together is to touch them tangentially. The rectangle must be as wide as the longer diameter and the longer side as long as the sum of the diameters.</p>\n\n<p>So if circle one has diameter = $d_1$ and circle two has diameter = $d_2$ and $d_1 \\ge d_2$ and the rectangle has sides $l$ and $w$ and $l \\ge w$, then the two circles will fit if $d_1 \\le w$ and $d_1 + d_2 \\le l$.</p>\n" } ]
Question: Packing two different circles in a rectangle Answer 1: <p>In the figure below, let circle $E$ have radius $r_1$, circle $G$ have radius $r_2$ and the rectangle be $H$ high and $W$ wide. You get the best fit by putting the circles in opposite corners. First you need $H,W \gt 2r_1, 2r_2$ or one of the circles won't fit all by itself. Then the coordinates of $E$ are $(r_1,H-r_1)$ and the coordinates of $G$ are $(W-r_2,r_2)$. You need them to be at least $r_1+r_2$ apart, so the requirement is $$\sqrt{(W-r_1-r_2)^2+(H-r_1-r_2)^2} \gt r_1+r_2$$</p> <p><a href="https://i.stack.imgur.com/2tPDu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2tPDu.png" alt="enter image description here"></a></p> Answer 2: <p>To pack two circles tightly together is to touch them tangentially. The rectangle must be as wide as the longer diameter and the longer side as long as the sum of the diameters.</p> <p>So if circle one has diameter = $d_1$ and circle two has diameter = $d_2$ and $d_1 \ge d_2$ and the rectangle has sides $l$ and $w$ and $l \ge w$, then the two circles will fit if $d_1 \le w$ and $d_1 + d_2 \le l$.</p>
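The corner-placement criterion from the higher-scored answer above is easy to turn into a decision procedure. Here is a small Python sketch (my own illustration, not from the thread; the function name `circles_fit` is mine, and I use a non-strict `>=` so that tangent circles count as fitting, whereas the answer states a strict inequality):

```python
import math

def circles_fit(W, H, r1, r2):
    """Return True if circles of radii r1 and r2 fit inside a W x H rectangle.

    Corner-placement criterion: each circle must fit on its own
    (2*r <= W and 2*r <= H), and with the centers pushed into opposite
    corners the center distance must be at least r1 + r2.
    """
    # Each circle alone must fit between the rectangle's sides.
    if 2 * r1 > W or 2 * r1 > H or 2 * r2 > W or 2 * r2 > H:
        return False
    # Centers in opposite corners: (r1, H - r1) and (W - r2, r2).
    # Given the check above, both differences below are nonnegative.
    dx = W - r1 - r2
    dy = H - r1 - r2
    return math.hypot(dx, dy) >= r1 + r2

print(circles_fit(10, 4, 2, 2))  # wide rectangle: the circles fit side by side
print(circles_fit(5, 4, 2, 2))   # too tight: corner centers are closer than r1 + r2
```

Note that the single-circle check already guarantees `dx, dy >= 0`, since `W >= 2 * max(r1, r2) >= r1 + r2`, so the corner placement in the distance test is always geometrically valid.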
Relationship between Borel $\sigma$-algebra on $l^2(\mathbb{N})$ and the product $\sigma$-algebra on $\mathbb{R}^\infty$
2,079,732
5
https://math.stackexchange.com/questions/2079732/relationship-between-borel-sigma-algebra-on-l2-mathbbn-and-the-product
<p>Recall that $\otimes_{n=1}^\infty \mathcal{B}(\mathbb{R})$ is defined as the smallest $\sigma$-algebra on $\mathbb{R}^\infty$ that makes every coordinate projection measurable. Furthermore $\mathcal{B}(\mathbb{R}^\infty)$ is the Borel $\sigma$-algebra generated by the product topology. We also know that $\otimes_{n=1}^\infty \mathcal{B}(\mathbb{R})=\mathcal{B}(\mathbb{R}^\infty)$.</p> <p>Furthermore let $l^2(\mathbb{N})$ be the space of square summable sequences indexed by $\mathbb{N}$. This is a Hilbert space with inner product $\langle x,y \rangle = \sum_n x_ny_n$. Let $\mathcal{B}(l^2(\mathbb{N}))$ denote the Borel sigma algebra induced by the natural metric on $l^2(\mathbb{N})$.</p> <p>We may note that $l^2(\mathbb{N})\subset \mathbb{R}^\infty$, but does it hold that $ l^2(\mathbb{N})\in \mathcal{B}(\mathbb{R}^\infty)$ and $\mathcal{B}(\mathbb{R}^\infty) \cap l^2(\mathbb{N})=\mathcal{B}(l^2(\mathbb{N}))$ ?</p> <p><em>The reason I'm interested in this is: I have two probability measures $P,Q$ on $(l^2(\mathbb{N}),\mathcal{B}(l^2(\mathbb{N})))$ and I would like to show that $P=Q$ by (if the above holds) showing that $$ \tilde{P}:=P(\cdot \cap l^2(\mathbb{N})), \quad \text{and} \quad \tilde{Q}:=Q(\cdot \cap l^2(\mathbb{N})), $$ are identical measures on $\mathbb{R}^\infty$ (which can be done by for example looking at the finite dimensional distribution).</em></p>
[ { "answer_id": 2082908, "score": 4, "text": "<p>To see that $\\ell^2$ is Borel in $\\mathbb{R}^\\infty$, consider the function $\\pi_i : \\mathbb{R}^\\infty \\to \\mathbb{R}$ defined by $\\pi_i(x) = x_i$. This is a continuous function on $\\mathbb{R}^\\infty$ by very definition of the product topology, so it is also Borel. Thus $f : \\mathbb{R}^\\infty \\to [0,\\infty]$ defined by $f(x) = \\sum_{i=1}^\\infty |f_i(x)|^2$ is also Borel, being a countable sum of Borel functions. But $f$ is just the $\\ell^2$ norm squared, so $\\ell^2 = f^{-1}([0,\\infty)$ is a Borel set in $\\mathbb{R}^\\infty$.</p>\n\n<p>For your second question, let $\\mathcal{G}$ be the $\\sigma$-algebra on $l^2$ induced by the Borel $\\sigma$-algebra on $\\mathbb{R}^\\infty$. You can show $\\mathcal{G}$ equals the Borel $\\sigma$-algebra on $l^2$ by showing that each $\\sigma$-algebra contains a generating set for the other. In one direction, note that sets of the form $\\pi_i^{-1}((a,b)) \\cap l^2$ generate $\\mathcal{G}$, and are open in $l^2$ (since $\\pi_i$ is a continuous linear functional on $l^2$). In the other direction, the function $f$ constructed above can be used to show that every ball of $l^2$ is Borel in $\\mathbb{R}^\\infty$ and hence in $\\mathcal{G}$.</p>\n" }, { "answer_id": 2080543, "score": 0, "text": "<p><strong>Can someone please verify that this proof is correct?</strong></p>\n\n<p>It is obvious that the induced metric $d$ on $l^2(\\mathbb{N})$ satisfies that for any sequence $(x_n)_{n\\in \\mathbb{N}}=((x_{1,n},x_{2,n},...))\\subset l^2(\\mathbb{N})$ and $x=(x_1,x_2,...)\\in l^2(\\mathbb{N})$\n \\begin{align*}\nd(x_n,x) = \\Big(\\sum_{i=1}^\\infty (x_{i,n}-x_i)^2\\Big)^{1/2} \\to_n 0 \\iff x_{i,n} \\to_n x_i \\quad \\forall i\\in \\mathbb{N},\n\\end{align*}\n by Weierstrass M-test. Hence $l^2(\\mathbb{N})$ has the topology $\\tau$ of coordinate-wise convergence. 
We may define a metric $\\tilde{d}$ on $\\mathbb{R}^\\infty$ given by $\\tilde{d}(x,y)=\\sum_{i=1}^\\infty \\frac{1\\land |x_{i,n}-x_i|}{2^i}$ and by example 1.2 Billingsley (1999) this metric also induces the topology of coordinate-wise convergence, which we denote $\\tilde{\\tau}$. </p>\n\n<p>Since convergence uniquely determines the topology for metrizable topologies, we get that $\\tilde{\\tau}$ is the product topology on $\\mathbb{R}^\\infty$ (since the product topology also is characterized by coordinate-wise convergence), such that $\\sigma(\\tilde{\\tau})=\\mathcal{B}(\\mathbb{R}^\\infty)$. </p>\n\n<p>Consider the restriction $\\tilde{d}|_{l^2(\\mathbb{N})}$ of $\\tilde{d}$ to $l^2(\\mathbb{N})$ and note that it is a metric equivalent to $d$, since they have the same convergent sequences and limits. Thus\n $$ \\mathcal{B}(l^2(\\mathbb{N})) := \\mathcal{B}(l^2(\\mathbb{N}),d) = \\mathcal{B}(l^2(\\mathbb{N}), \\tilde{d}|_{l^2(\\mathbb{N})}) = \\sigma(\\tilde{\\tau}\\cap l^2(\\mathbb{N}))$$\n$$ = \\sigma(\\tilde{\\tau})\\cap l^2(\\mathbb{N}) = \\mathcal{B}(\\mathbb{R}^\\infty) \\cap l^2(\\mathbb{N}),$$ \n where we used that the restricted metric $\\tilde{d}|_{l^2(\\mathbb{N})}$ induces the subspace topology $\\tilde{\\tau}\\cap l^2(\\mathbb{N})$. </p>\n" }, { "answer_id": 2082908, "score": 4, "text": "<p>To see that $\\ell^2$ is Borel in $\\mathbb{R}^\\infty$, consider the function $\\pi_i : \\mathbb{R}^\\infty \\to \\mathbb{R}$ defined by $\\pi_i(x) = x_i$. This is a continuous function on $\\mathbb{R}^\\infty$ by very definition of the product topology, so it is also Borel. Thus $f : \\mathbb{R}^\\infty \\to [0,\\infty]$ defined by $f(x) = \\sum_{i=1}^\\infty |f_i(x)|^2$ is also Borel, being a countable sum of Borel functions. 
But $f$ is just the $\\ell^2$ norm squared, so $\\ell^2 = f^{-1}([0,\\infty)$ is a Borel set in $\\mathbb{R}^\\infty$.</p>\n\n<p>For your second question, let $\\mathcal{G}$ be the $\\sigma$-algebra on $l^2$ induced by the Borel $\\sigma$-algebra on $\\mathbb{R}^\\infty$. You can show $\\mathcal{G}$ equals the Borel $\\sigma$-algebra on $l^2$ by showing that each $\\sigma$-algebra contains a generating set for the other. In one direction, note that sets of the form $\\pi_i^{-1}((a,b)) \\cap l^2$ generate $\\mathcal{G}$, and are open in $l^2$ (since $\\pi_i$ is a continuous linear functional on $l^2$). In the other direction, the function $f$ constructed above can be used to show that every ball of $l^2$ is Borel in $\\mathbb{R}^\\infty$ and hence in $\\mathcal{G}$.</p>\n" }, { "answer_id": 2080543, "score": 0, "text": "<p><strong>Can someone please verify that this proof is correct?</strong></p>\n\n<p>It is obvious that the induced metric $d$ on $l^2(\\mathbb{N})$ satisfies that for any sequence $(x_n)_{n\\in \\mathbb{N}}=((x_{1,n},x_{2,n},...))\\subset l^2(\\mathbb{N})$ and $x=(x_1,x_2,...)\\in l^2(\\mathbb{N})$\n \\begin{align*}\nd(x_n,x) = \\Big(\\sum_{i=1}^\\infty (x_{i,n}-x_i)^2\\Big)^{1/2} \\to_n 0 \\iff x_{i,n} \\to_n x_i \\quad \\forall i\\in \\mathbb{N},\n\\end{align*}\n by Weierstrass M-test. Hence $l^2(\\mathbb{N})$ has the topology $\\tau$ of coordinate-wise convergence. We may define a metric $\\tilde{d}$ on $\\mathbb{R}^\\infty$ given by $\\tilde{d}(x,y)=\\sum_{i=1}^\\infty \\frac{1\\land |x_{i,n}-x_i|}{2^i}$ and by example 1.2 Billingsley (1999) this metric also induces the topology of coordinate-wise convergence, which we denote $\\tilde{\\tau}$. 
</p>\n\n<p>Since convergence uniquely determines the topology for metrizable topologies, we get that $\\tilde{\\tau}$ is the product topology on $\\mathbb{R}^\\infty$ (since the product topology also is characterized by coordinate-wise convergence), such that $\\sigma(\\tilde{\\tau})=\\mathcal{B}(\\mathbb{R}^\\infty)$. </p>\n\n<p>Consider the restriction $\\tilde{d}|_{l^2(\\mathbb{N})}$ of $\\tilde{d}$ to $l^2(\\mathbb{N})$ and note that it is a metric equivalent to $d$, since they have the same convergent sequences and limits. Thus\n $$ \\mathcal{B}(l^2(\\mathbb{N})) := \\mathcal{B}(l^2(\\mathbb{N}),d) = \\mathcal{B}(l^2(\\mathbb{N}), \\tilde{d}|_{l^2(\\mathbb{N})}) = \\sigma(\\tilde{\\tau}\\cap l^2(\\mathbb{N}))$$\n$$ = \\sigma(\\tilde{\\tau})\\cap l^2(\\mathbb{N}) = \\mathcal{B}(\\mathbb{R}^\\infty) \\cap l^2(\\mathbb{N}),$$ \n where we used that the restricted metric $\\tilde{d}|_{l^2(\\mathbb{N})}$ induces the subspace topology $\\tilde{\\tau}\\cap l^2(\\mathbb{N})$. </p>\n" }, { "answer_id": 2082908, "score": 4, "text": "<p>To see that $\\ell^2$ is Borel in $\\mathbb{R}^\\infty$, consider the function $\\pi_i : \\mathbb{R}^\\infty \\to \\mathbb{R}$ defined by $\\pi_i(x) = x_i$. This is a continuous function on $\\mathbb{R}^\\infty$ by very definition of the product topology, so it is also Borel. Thus $f : \\mathbb{R}^\\infty \\to [0,\\infty]$ defined by $f(x) = \\sum_{i=1}^\\infty |f_i(x)|^2$ is also Borel, being a countable sum of Borel functions. But $f$ is just the $\\ell^2$ norm squared, so $\\ell^2 = f^{-1}([0,\\infty)$ is a Borel set in $\\mathbb{R}^\\infty$.</p>\n\n<p>For your second question, let $\\mathcal{G}$ be the $\\sigma$-algebra on $l^2$ induced by the Borel $\\sigma$-algebra on $\\mathbb{R}^\\infty$. You can show $\\mathcal{G}$ equals the Borel $\\sigma$-algebra on $l^2$ by showing that each $\\sigma$-algebra contains a generating set for the other. 
In one direction, note that sets of the form $\\pi_i^{-1}((a,b)) \\cap l^2$ generate $\\mathcal{G}$, and are open in $l^2$ (since $\\pi_i$ is a continuous linear functional on $l^2$). In the other direction, the function $f$ constructed above can be used to show that every ball of $l^2$ is Borel in $\\mathbb{R}^\\infty$ and hence in $\\mathcal{G}$.</p>\n" }, { "answer_id": 2080543, "score": 0, "text": "<p><strong>Can someone please verify that this proof is correct?</strong></p>\n\n<p>It is obvious that the induced metric $d$ on $l^2(\\mathbb{N})$ satisfies that for any sequence $(x_n)_{n\\in \\mathbb{N}}=((x_{1,n},x_{2,n},...))\\subset l^2(\\mathbb{N})$ and $x=(x_1,x_2,...)\\in l^2(\\mathbb{N})$\n \\begin{align*}\nd(x_n,x) = \\Big(\\sum_{i=1}^\\infty (x_{i,n}-x_i)^2\\Big)^{1/2} \\to_n 0 \\iff x_{i,n} \\to_n x_i \\quad \\forall i\\in \\mathbb{N},\n\\end{align*}\n by Weierstrass M-test. Hence $l^2(\\mathbb{N})$ has the topology $\\tau$ of coordinate-wise convergence. We may define a metric $\\tilde{d}$ on $\\mathbb{R}^\\infty$ given by $\\tilde{d}(x,y)=\\sum_{i=1}^\\infty \\frac{1\\land |x_{i,n}-x_i|}{2^i}$ and by example 1.2 Billingsley (1999) this metric also induces the topology of coordinate-wise convergence, which we denote $\\tilde{\\tau}$. </p>\n\n<p>Since convergence uniquely determines the topology for metrizable topologies, we get that $\\tilde{\\tau}$ is the product topology on $\\mathbb{R}^\\infty$ (since the product topology also is characterized by coordinate-wise convergence), such that $\\sigma(\\tilde{\\tau})=\\mathcal{B}(\\mathbb{R}^\\infty)$. </p>\n\n<p>Consider the restriction $\\tilde{d}|_{l^2(\\mathbb{N})}$ of $\\tilde{d}$ to $l^2(\\mathbb{N})$ and note that it is a metric equivalent to $d$, since they have the same convergent sequences and limits. 
Thus\n $$ \\mathcal{B}(l^2(\\mathbb{N})) := \\mathcal{B}(l^2(\\mathbb{N}),d) = \\mathcal{B}(l^2(\\mathbb{N}), \\tilde{d}|_{l^2(\\mathbb{N})}) = \\sigma(\\tilde{\\tau}\\cap l^2(\\mathbb{N}))$$\n$$ = \\sigma(\\tilde{\\tau})\\cap l^2(\\mathbb{N}) = \\mathcal{B}(\\mathbb{R}^\\infty) \\cap l^2(\\mathbb{N}),$$ \n where we used that the restricted metric $\\tilde{d}|_{l^2(\\mathbb{N})}$ induces the subspace topology $\\tilde{\\tau}\\cap l^2(\\mathbb{N})$. </p>\n" }, { "answer_id": 2082908, "score": 4, "text": "<p>To see that $\\ell^2$ is Borel in $\\mathbb{R}^\\infty$, consider the function $\\pi_i : \\mathbb{R}^\\infty \\to \\mathbb{R}$ defined by $\\pi_i(x) = x_i$. This is a continuous function on $\\mathbb{R}^\\infty$ by very definition of the product topology, so it is also Borel. Thus $f : \\mathbb{R}^\\infty \\to [0,\\infty]$ defined by $f(x) = \\sum_{i=1}^\\infty |f_i(x)|^2$ is also Borel, being a countable sum of Borel functions. But $f$ is just the $\\ell^2$ norm squared, so $\\ell^2 = f^{-1}([0,\\infty)$ is a Borel set in $\\mathbb{R}^\\infty$.</p>\n\n<p>For your second question, let $\\mathcal{G}$ be the $\\sigma$-algebra on $l^2$ induced by the Borel $\\sigma$-algebra on $\\mathbb{R}^\\infty$. You can show $\\mathcal{G}$ equals the Borel $\\sigma$-algebra on $l^2$ by showing that each $\\sigma$-algebra contains a generating set for the other. In one direction, note that sets of the form $\\pi_i^{-1}((a,b)) \\cap l^2$ generate $\\mathcal{G}$, and are open in $l^2$ (since $\\pi_i$ is a continuous linear functional on $l^2$). 
In the other direction, the function $f$ constructed above can be used to show that every ball of $l^2$ is Borel in $\\mathbb{R}^\\infty$ and hence in $\\mathcal{G}$.</p>\n" }, { "answer_id": 2080543, "score": 0, "text": "<p><strong>Can someone please verify that this proof is correct?</strong></p>\n\n<p>It is obvious that the induced metric $d$ on $l^2(\\mathbb{N})$ satisfies that for any sequence $(x_n)_{n\\in \\mathbb{N}}=((x_{1,n},x_{2,n},...))\\subset l^2(\\mathbb{N})$ and $x=(x_1,x_2,...)\\in l^2(\\mathbb{N})$\n \\begin{align*}\nd(x_n,x) = \\Big(\\sum_{i=1}^\\infty (x_{i,n}-x_i)^2\\Big)^{1/2} \\to_n 0 \\iff x_{i,n} \\to_n x_i \\quad \\forall i\\in \\mathbb{N},\n\\end{align*}\n by Weierstrass M-test. Hence $l^2(\\mathbb{N})$ has the topology $\\tau$ of coordinate-wise convergence. We may define a metric $\\tilde{d}$ on $\\mathbb{R}^\\infty$ given by $\\tilde{d}(x,y)=\\sum_{i=1}^\\infty \\frac{1\\land |x_{i,n}-x_i|}{2^i}$ and by example 1.2 Billingsley (1999) this metric also induces the topology of coordinate-wise convergence, which we denote $\\tilde{\\tau}$. </p>\n\n<p>Since convergence uniquely determines the topology for metrizable topologies, we get that $\\tilde{\\tau}$ is the product topology on $\\mathbb{R}^\\infty$ (since the product topology also is characterized by coordinate-wise convergence), such that $\\sigma(\\tilde{\\tau})=\\mathcal{B}(\\mathbb{R}^\\infty)$. </p>\n\n<p>Consider the restriction $\\tilde{d}|_{l^2(\\mathbb{N})}$ of $\\tilde{d}$ to $l^2(\\mathbb{N})$ and note that it is a metric equivalent to $d$, since they have the same convergent sequences and limits. 
Thus\n $$ \\mathcal{B}(l^2(\\mathbb{N})) := \\mathcal{B}(l^2(\\mathbb{N}),d) = \\mathcal{B}(l^2(\\mathbb{N}), \\tilde{d}|_{l^2(\\mathbb{N})}) = \\sigma(\\tilde{\\tau}\\cap l^2(\\mathbb{N}))$$\n$$ = \\sigma(\\tilde{\\tau})\\cap l^2(\\mathbb{N}) = \\mathcal{B}(\\mathbb{R}^\\infty) \\cap l^2(\\mathbb{N}),$$ \n where we used that the restricted metric $\\tilde{d}|_{l^2(\\mathbb{N})}$ induces the subspace topology $\\tilde{\\tau}\\cap l^2(\\mathbb{N})$. </p>\n" } ]
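The construction in the accepted answer above — the Borel map $f(x) = \sum_i \pi_i(x)^2$ with $\ell^2 = f^{-1}([0,\infty))$ — can be sanity-checked numerically. Below is a minimal sketch added for illustration (the function names are mine, not from the thread): the monotone partial sums of squares increase to $f(x)$, which is finite exactly when the sequence lies in $\ell^2$.

```python
import math

def sq_partial_sum(x, n):
    # Partial sum sum_{i=1}^{n} pi_i(x)^2. These monotone partial sums
    # increase to f(x) = ||x||_2^2, which is finite iff x is in l^2.
    return sum(x(i) ** 2 for i in range(1, n + 1))

def in_l2(i):
    # x_i = 1/i lies in l^2: the squares sum to pi^2/6.
    return 1.0 / i

def not_in_l2(i):
    # x_i = 1/sqrt(i) does not: the partial sums grow like log(n).
    return 1.0 / math.sqrt(i)

# The first sequence's partial sums stay bounded near pi^2/6,
# while the second's exceed any fixed bound as n grows.
assert abs(sq_partial_sum(in_l2, 100_000) - math.pi ** 2 / 6) < 1e-3
assert sq_partial_sum(not_in_l2, 100_000) > 10
```

This only illustrates the role of $f$ as a pointwise limit of the Borel functions $x \mapsto \sum_{i \le n} \pi_i(x)^2$; the measurability argument itself is the content of the answer.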
Question: Relationship between Borel $\sigma$-algebra on $l^2(\mathbb{N})$ and the product $\sigma$-algebra on $\mathbb{R}^\infty$ Answer 1: <p>To see that $\ell^2$ is Borel in $\mathbb{R}^\infty$, consider the function $\pi_i : \mathbb{R}^\infty \to \mathbb{R}$ defined by $\pi_i(x) = x_i$. This is a continuous function on $\mathbb{R}^\infty$ by the very definition of the product topology, so it is also Borel. Thus $f : \mathbb{R}^\infty \to [0,\infty]$ defined by $f(x) = \sum_{i=1}^\infty |\pi_i(x)|^2$ is also Borel, being a countable sum of Borel functions. But $f$ is just the $\ell^2$ norm squared, so $\ell^2 = f^{-1}([0,\infty))$ is a Borel set in $\mathbb{R}^\infty$.</p> <p>For your second question, let $\mathcal{G}$ be the $\sigma$-algebra on $l^2$ induced by the Borel $\sigma$-algebra on $\mathbb{R}^\infty$. You can show $\mathcal{G}$ equals the Borel $\sigma$-algebra on $l^2$ by showing that each $\sigma$-algebra contains a generating set for the other. In one direction, note that sets of the form $\pi_i^{-1}((a,b)) \cap l^2$ generate $\mathcal{G}$, and are open in $l^2$ (since $\pi_i$ is a continuous linear functional on $l^2$). In the other direction, the function $f$ constructed above can be used to show that every ball of $l^2$ is Borel in $\mathbb{R}^\infty$ and hence in $\mathcal{G}$.</p> Answer 2: <p><strong>Can someone please verify that this proof is correct?</strong></p> <p>It is obvious that the induced metric $d$ on $l^2(\mathbb{N})$ satisfies that for any sequence $(x_n)_{n\in \mathbb{N}}=((x_{1,n},x_{2,n},...))\subset l^2(\mathbb{N})$ and $x=(x_1,x_2,...)\in l^2(\mathbb{N})$ \begin{align*} d(x_n,x) = \Big(\sum_{i=1}^\infty (x_{i,n}-x_i)^2\Big)^{1/2} \to_n 0 \iff x_{i,n} \to_n x_i \quad \forall i\in \mathbb{N}, \end{align*} by the Weierstrass M-test. Hence $l^2(\mathbb{N})$ has the topology $\tau$ of coordinate-wise convergence. 
We may define a metric $\tilde{d}$ on $\mathbb{R}^\infty$ given by $\tilde{d}(x,y)=\sum_{i=1}^\infty \frac{1\land |x_{i,n}-x_i|}{2^i}$ and by example 1.2 Billingsley (1999) this metric also induces the topology of coordinate-wise convergence, which we denote $\tilde{\tau}$. </p> <p>Since convergence uniquely determines the topology for metrizable topologies, we get that $\tilde{\tau}$ is the product topology on $\mathbb{R}^\infty$ (since the product topology also is characterized by coordinate-wise convergence), such that $\sigma(\tilde{\tau})=\mathcal{B}(\mathbb{R}^\infty)$. </p> <p>Consider the restriction $\tilde{d}|_{l^2(\mathbb{N})}$ of $\tilde{d}$ to $l^2(\mathbb{N})$ and note that it is a metric equivalent to $d$, since they have the same convergent sequences and limits. Thus $$ \mathcal{B}(l^2(\mathbb{N})) := \mathcal{B}(l^2(\mathbb{N}),d) = \mathcal{B}(l^2(\mathbb{N}), \tilde{d}|_{l^2(\mathbb{N})}) = \sigma(\tilde{\tau}\cap l^2(\mathbb{N}))$$ $$ = \sigma(\tilde{\tau})\cap l^2(\mathbb{N}) = \mathcal{B}(\mathbb{R}^\infty) \cap l^2(\mathbb{N}),$$ where we used that the restricted metric $\tilde{d}|_{l^2(\mathbb{N})}$ induces the subspace topology $\tilde{\tau}\cap l^2(\mathbb{N})$. </p> Answer 3: <p>To see that $\ell^2$ is Borel in $\mathbb{R}^\infty$, consider the function $\pi_i : \mathbb{R}^\infty \to \mathbb{R}$ defined by $\pi_i(x) = x_i$. This is a continuous function on $\mathbb{R}^\infty$ by very definition of the product topology, so it is also Borel. Thus $f : \mathbb{R}^\infty \to [0,\infty]$ defined by $f(x) = \sum_{i=1}^\infty |f_i(x)|^2$ is also Borel, being a countable sum of Borel functions. But $f$ is just the $\ell^2$ norm squared, so $\ell^2 = f^{-1}([0,\infty)$ is a Borel set in $\mathbb{R}^\infty$.</p> <p>For your second question, let $\mathcal{G}$ be the $\sigma$-algebra on $l^2$ induced by the Borel $\sigma$-algebra on $\mathbb{R}^\infty$. 
You can show $\mathcal{G}$ equals the Borel $\sigma$-algebra on $l^2$ by showing that each $\sigma$-algebra contains a generating set for the other. In one direction, note that sets of the form $\pi_i^{-1}((a,b)) \cap l^2$ generate $\mathcal{G}$, and are open in $l^2$ (since $\pi_i$ is a continuous linear functional on $l^2$). In the other direction, the function $f$ constructed above can be used to show that every ball of $l^2$ is Borel in $\mathbb{R}^\infty$ and hence in $\mathcal{G}$.</p> Answer 4: <p><strong>Can someone please verify that this proof is correct?</strong></p> <p>It is obvious that the induced metric $d$ on $l^2(\mathbb{N})$ satisfies that for any sequence $(x_n)_{n\in \mathbb{N}}=((x_{1,n},x_{2,n},...))\subset l^2(\mathbb{N})$ and $x=(x_1,x_2,...)\in l^2(\mathbb{N})$ \begin{align*} d(x_n,x) = \Big(\sum_{i=1}^\infty (x_{i,n}-x_i)^2\Big)^{1/2} \to_n 0 \iff x_{i,n} \to_n x_i \quad \forall i\in \mathbb{N}, \end{align*} by Weierstrass M-test. Hence $l^2(\mathbb{N})$ has the topology $\tau$ of coordinate-wise convergence. We may define a metric $\tilde{d}$ on $\mathbb{R}^\infty$ given by $\tilde{d}(x,y)=\sum_{i=1}^\infty \frac{1\land |x_{i,n}-x_i|}{2^i}$ and by example 1.2 Billingsley (1999) this metric also induces the topology of coordinate-wise convergence, which we denote $\tilde{\tau}$. </p> <p>Since convergence uniquely determines the topology for metrizable topologies, we get that $\tilde{\tau}$ is the product topology on $\mathbb{R}^\infty$ (since the product topology also is characterized by coordinate-wise convergence), such that $\sigma(\tilde{\tau})=\mathcal{B}(\mathbb{R}^\infty)$. </p> <p>Consider the restriction $\tilde{d}|_{l^2(\mathbb{N})}$ of $\tilde{d}$ to $l^2(\mathbb{N})$ and note that it is a metric equivalent to $d$, since they have the same convergent sequences and limits. 
Thus $$ \mathcal{B}(l^2(\mathbb{N})) := \mathcal{B}(l^2(\mathbb{N}),d) = \mathcal{B}(l^2(\mathbb{N}), \tilde{d}|_{l^2(\mathbb{N})}) = \sigma(\tilde{\tau}\cap l^2(\mathbb{N}))$$ $$ = \sigma(\tilde{\tau})\cap l^2(\mathbb{N}) = \mathcal{B}(\mathbb{R}^\infty) \cap l^2(\mathbb{N}),$$ where we used that the restricted metric $\tilde{d}|_{l^2(\mathbb{N})}$ induces the subspace topology $\tilde{\tau}\cap l^2(\mathbb{N})$. </p> Answer 5: <p>To see that $\ell^2$ is Borel in $\mathbb{R}^\infty$, consider the function $\pi_i : \mathbb{R}^\infty \to \mathbb{R}$ defined by $\pi_i(x) = x_i$. This is a continuous function on $\mathbb{R}^\infty$ by very definition of the product topology, so it is also Borel. Thus $f : \mathbb{R}^\infty \to [0,\infty]$ defined by $f(x) = \sum_{i=1}^\infty |f_i(x)|^2$ is also Borel, being a countable sum of Borel functions. But $f$ is just the $\ell^2$ norm squared, so $\ell^2 = f^{-1}([0,\infty)$ is a Borel set in $\mathbb{R}^\infty$.</p> <p>For your second question, let $\mathcal{G}$ be the $\sigma$-algebra on $l^2$ induced by the Borel $\sigma$-algebra on $\mathbb{R}^\infty$. You can show $\mathcal{G}$ equals the Borel $\sigma$-algebra on $l^2$ by showing that each $\sigma$-algebra contains a generating set for the other. In one direction, note that sets of the form $\pi_i^{-1}((a,b)) \cap l^2$ generate $\mathcal{G}$, and are open in $l^2$ (since $\pi_i$ is a continuous linear functional on $l^2$). 
In the other direction, the function $f$ constructed above can be used to show that every ball of $l^2$ is Borel in $\mathbb{R}^\infty$ and hence in $\mathcal{G}$.</p> Answer 6: <p><strong>Can someone please verify that this proof is correct?</strong></p> <p>It is obvious that the induced metric $d$ on $l^2(\mathbb{N})$ satisfies that for any sequence $(x_n)_{n\in \mathbb{N}}=((x_{1,n},x_{2,n},...))\subset l^2(\mathbb{N})$ and $x=(x_1,x_2,...)\in l^2(\mathbb{N})$ \begin{align*} d(x_n,x) = \Big(\sum_{i=1}^\infty (x_{i,n}-x_i)^2\Big)^{1/2} \to_n 0 \iff x_{i,n} \to_n x_i \quad \forall i\in \mathbb{N}, \end{align*} by Weierstrass M-test. Hence $l^2(\mathbb{N})$ has the topology $\tau$ of coordinate-wise convergence. We may define a metric $\tilde{d}$ on $\mathbb{R}^\infty$ given by $\tilde{d}(x,y)=\sum_{i=1}^\infty \frac{1\land |x_{i,n}-x_i|}{2^i}$ and by example 1.2 Billingsley (1999) this metric also induces the topology of coordinate-wise convergence, which we denote $\tilde{\tau}$. </p> <p>Since convergence uniquely determines the topology for metrizable topologies, we get that $\tilde{\tau}$ is the product topology on $\mathbb{R}^\infty$ (since the product topology also is characterized by coordinate-wise convergence), such that $\sigma(\tilde{\tau})=\mathcal{B}(\mathbb{R}^\infty)$. </p> <p>Consider the restriction $\tilde{d}|_{l^2(\mathbb{N})}$ of $\tilde{d}$ to $l^2(\mathbb{N})$ and note that it is a metric equivalent to $d$, since they have the same convergent sequences and limits. Thus $$ \mathcal{B}(l^2(\mathbb{N})) := \mathcal{B}(l^2(\mathbb{N}),d) = \mathcal{B}(l^2(\mathbb{N}), \tilde{d}|_{l^2(\mathbb{N})}) = \sigma(\tilde{\tau}\cap l^2(\mathbb{N}))$$ $$ = \sigma(\tilde{\tau})\cap l^2(\mathbb{N}) = \mathcal{B}(\mathbb{R}^\infty) \cap l^2(\mathbb{N}),$$ where we used that the restricted metric $\tilde{d}|_{l^2(\mathbb{N})}$ induces the subspace topology $\tilde{\tau}\cap l^2(\mathbb{N})$. 
</p> Answer 7: <p>To see that $\ell^2$ is Borel in $\mathbb{R}^\infty$, consider the function $\pi_i : \mathbb{R}^\infty \to \mathbb{R}$ defined by $\pi_i(x) = x_i$. This is a continuous function on $\mathbb{R}^\infty$ by very definition of the product topology, so it is also Borel. Thus $f : \mathbb{R}^\infty \to [0,\infty]$ defined by $f(x) = \sum_{i=1}^\infty |f_i(x)|^2$ is also Borel, being a countable sum of Borel functions. But $f$ is just the $\ell^2$ norm squared, so $\ell^2 = f^{-1}([0,\infty)$ is a Borel set in $\mathbb{R}^\infty$.</p> <p>For your second question, let $\mathcal{G}$ be the $\sigma$-algebra on $l^2$ induced by the Borel $\sigma$-algebra on $\mathbb{R}^\infty$. You can show $\mathcal{G}$ equals the Borel $\sigma$-algebra on $l^2$ by showing that each $\sigma$-algebra contains a generating set for the other. In one direction, note that sets of the form $\pi_i^{-1}((a,b)) \cap l^2$ generate $\mathcal{G}$, and are open in $l^2$ (since $\pi_i$ is a continuous linear functional on $l^2$). In the other direction, the function $f$ constructed above can be used to show that every ball of $l^2$ is Borel in $\mathbb{R}^\infty$ and hence in $\mathcal{G}$.</p> Answer 8: <p><strong>Can someone please verify that this proof is correct?</strong></p> <p>It is obvious that the induced metric $d$ on $l^2(\mathbb{N})$ satisfies that for any sequence $(x_n)_{n\in \mathbb{N}}=((x_{1,n},x_{2,n},...))\subset l^2(\mathbb{N})$ and $x=(x_1,x_2,...)\in l^2(\mathbb{N})$ \begin{align*} d(x_n,x) = \Big(\sum_{i=1}^\infty (x_{i,n}-x_i)^2\Big)^{1/2} \to_n 0 \iff x_{i,n} \to_n x_i \quad \forall i\in \mathbb{N}, \end{align*} by Weierstrass M-test. Hence $l^2(\mathbb{N})$ has the topology $\tau$ of coordinate-wise convergence. 
We may define a metric $\tilde{d}$ on $\mathbb{R}^\infty$ given by $\tilde{d}(x,y)=\sum_{i=1}^\infty \frac{1\land |x_{i,n}-x_i|}{2^i}$ and by example 1.2 Billingsley (1999) this metric also induces the topology of coordinate-wise convergence, which we denote $\tilde{\tau}$. </p> <p>Since convergence uniquely determines the topology for metrizable topologies, we get that $\tilde{\tau}$ is the product topology on $\mathbb{R}^\infty$ (since the product topology also is characterized by coordinate-wise convergence), such that $\sigma(\tilde{\tau})=\mathcal{B}(\mathbb{R}^\infty)$. </p> <p>Consider the restriction $\tilde{d}|_{l^2(\mathbb{N})}$ of $\tilde{d}$ to $l^2(\mathbb{N})$ and note that it is a metric equivalent to $d$, since they have the same convergent sequences and limits. Thus $$ \mathcal{B}(l^2(\mathbb{N})) := \mathcal{B}(l^2(\mathbb{N}),d) = \mathcal{B}(l^2(\mathbb{N}), \tilde{d}|_{l^2(\mathbb{N})}) = \sigma(\tilde{\tau}\cap l^2(\mathbb{N}))$$ $$ = \sigma(\tilde{\tau})\cap l^2(\mathbb{N}) = \mathcal{B}(\mathbb{R}^\infty) \cap l^2(\mathbb{N}),$$ where we used that the restricted metric $\tilde{d}|_{l^2(\mathbb{N})}$ induces the subspace topology $\tilde{\tau}\cap l^2(\mathbb{N})$. </p> Answer 9: <p>To see that $\ell^2$ is Borel in $\mathbb{R}^\infty$, consider the function $\pi_i : \mathbb{R}^\infty \to \mathbb{R}$ defined by $\pi_i(x) = x_i$. This is a continuous function on $\mathbb{R}^\infty$ by very definition of the product topology, so it is also Borel. Thus $f : \mathbb{R}^\infty \to [0,\infty]$ defined by $f(x) = \sum_{i=1}^\infty |f_i(x)|^2$ is also Borel, being a countable sum of Borel functions. But $f$ is just the $\ell^2$ norm squared, so $\ell^2 = f^{-1}([0,\infty)$ is a Borel set in $\mathbb{R}^\infty$.</p> <p>For your second question, let $\mathcal{G}$ be the $\sigma$-algebra on $l^2$ induced by the Borel $\sigma$-algebra on $\mathbb{R}^\infty$. 
You can show $\mathcal{G}$ equals the Borel $\sigma$-algebra on $l^2$ by showing that each $\sigma$-algebra contains a generating set for the other. In one direction, note that sets of the form $\pi_i^{-1}((a,b)) \cap l^2$ generate $\mathcal{G}$, and are open in $l^2$ (since $\pi_i$ is a continuous linear functional on $l^2$). In the other direction, the function $f$ constructed above can be used to show that every ball of $l^2$ is Borel in $\mathbb{R}^\infty$ and hence in $\mathcal{G}$.</p> Answer 10: <p><strong>Can someone please verify that this proof is correct?</strong></p> <p>It is obvious that the induced metric $d$ on $l^2(\mathbb{N})$ satisfies that for any sequence $(x_n)_{n\in \mathbb{N}}=((x_{1,n},x_{2,n},...))\subset l^2(\mathbb{N})$ and $x=(x_1,x_2,...)\in l^2(\mathbb{N})$ \begin{align*} d(x_n,x) = \Big(\sum_{i=1}^\infty (x_{i,n}-x_i)^2\Big)^{1/2} \to_n 0 \iff x_{i,n} \to_n x_i \quad \forall i\in \mathbb{N}, \end{align*} by Weierstrass M-test. Hence $l^2(\mathbb{N})$ has the topology $\tau$ of coordinate-wise convergence. We may define a metric $\tilde{d}$ on $\mathbb{R}^\infty$ given by $\tilde{d}(x,y)=\sum_{i=1}^\infty \frac{1\land |x_{i,n}-x_i|}{2^i}$ and by example 1.2 Billingsley (1999) this metric also induces the topology of coordinate-wise convergence, which we denote $\tilde{\tau}$. </p> <p>Since convergence uniquely determines the topology for metrizable topologies, we get that $\tilde{\tau}$ is the product topology on $\mathbb{R}^\infty$ (since the product topology also is characterized by coordinate-wise convergence), such that $\sigma(\tilde{\tau})=\mathcal{B}(\mathbb{R}^\infty)$. </p> <p>Consider the restriction $\tilde{d}|_{l^2(\mathbb{N})}$ of $\tilde{d}$ to $l^2(\mathbb{N})$ and note that it is a metric equivalent to $d$, since they have the same convergent sequences and limits. 
Thus $$ \mathcal{B}(l^2(\mathbb{N})) := \mathcal{B}(l^2(\mathbb{N}),d) = \mathcal{B}(l^2(\mathbb{N}), \tilde{d}|_{l^2(\mathbb{N})}) = \sigma(\tilde{\tau}\cap l^2(\mathbb{N}))$$ $$ = \sigma(\tilde{\tau})\cap l^2(\mathbb{N}) = \mathcal{B}(\mathbb{R}^\infty) \cap l^2(\mathbb{N}),$$ where we used that the restricted metric $\tilde{d}|_{l^2(\mathbb{N})}$ induces the subspace topology $\tilde{\tau}\cap l^2(\mathbb{N})$. </p> Answer 11: <p>To see that $\ell^2$ is Borel in $\mathbb{R}^\infty$, consider the function $\pi_i : \mathbb{R}^\infty \to \mathbb{R}$ defined by $\pi_i(x) = x_i$. This is a continuous function on $\mathbb{R}^\infty$ by very definition of the product topology, so it is also Borel. Thus $f : \mathbb{R}^\infty \to [0,\infty]$ defined by $f(x) = \sum_{i=1}^\infty |f_i(x)|^2$ is also Borel, being a countable sum of Borel functions. But $f$ is just the $\ell^2$ norm squared, so $\ell^2 = f^{-1}([0,\infty)$ is a Borel set in $\mathbb{R}^\infty$.</p> <p>For your second question, let $\mathcal{G}$ be the $\sigma$-algebra on $l^2$ induced by the Borel $\sigma$-algebra on $\mathbb{R}^\infty$. You can show $\mathcal{G}$ equals the Borel $\sigma$-algebra on $l^2$ by showing that each $\sigma$-algebra contains a generating set for the other. In one direction, note that sets of the form $\pi_i^{-1}((a,b)) \cap l^2$ generate $\mathcal{G}$, and are open in $l^2$ (since $\pi_i$ is a continuous linear functional on $l^2$). 
In the other direction, the function $f$ constructed above can be used to show that every ball of $l^2$ is Borel in $\mathbb{R}^\infty$ and hence in $\mathcal{G}$.</p>
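The countable-sum argument above (the squared $\ell^2$ norm as a monotone limit of finite partial sums, each depending on only finitely many coordinates) can be illustrated numerically. The following sketch is a sanity check, not a proof, and the helper name is hypothetical:

```python
import math

# Each partial sum f_N depends on finitely many coordinates of x, so it is
# continuous (hence Borel) on R^infty; the full sum f is then Borel as well.
def partial_sum_of_squares(x, N):
    """f_N(x) = x_1^2 + ... + x_N^2 for a sequence given as a function i -> x_i."""
    return sum(x(i) ** 2 for i in range(1, N + 1))

x = lambda i: 1.0 / i  # the sequence (1, 1/2, 1/3, ...), a member of l^2

approximations = [partial_sum_of_squares(x, N) for N in (10, 100, 10000)]
# The partial sums increase monotonically towards pi^2 / 6, so f(x) is finite
# and this x indeed lies in l^2 = f^{-1}([0, infinity)).
print(approximations, math.pi ** 2 / 6)
```

For a sequence outside $\ell^2$ (say $x_i = 1/\sqrt{i}$) the same partial sums grow without bound, which is exactly the dichotomy the preimage argument exploits.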
Is the set {$x^n$} a closed and bounded subset of C $[0,1]$ that is NOT compact?
2,079,745
5
https://math.stackexchange.com/questions/2079745/is-the-set-xn-a-closed-and-bounded-subset-of-c-0-1-that-is-not-compact
<p>Consider the space of continuous, real-valued functions on $[0,1]$ under the sup-norm metric. I believe that the set of functions {$x^n$} for all natural n is a closed and bounded subset of our metric space that is not compact. It is bounded as the difference between any two functions in the set is at most $1$ and I believe it is not compact as it does not contain a convergent subsequence (no subsequence converges uniformly to a continuous function, let alone a continuous function in the set). I don't have a rigorous proof of closure but do believe it is closed. Hence, I believe that it works as an example of a subset of a metric space that is closed and bounded but not compact. Is my reasoning valid?</p>
[ { "answer_id": 2079750, "score": 1, "text": "<p>Suppose it is not closed. Then its complement $U$ is not open, so there exists $f\\in U$ such that every ball $B(f,1/n)$ contains a function $x^m$. We deduce that $f=\\lim_n x^{u(n)}$ for some exponents $u(n)$. Since $f$ is not of the form $x^m$, $\\lim_n u(n)=+\\infty$. But in this case, the pointwise limit of $x^{u(n)}$ is the function $g:[0,1]\\rightarrow \\mathbb{R}$ such that $g(x)=0$ for $x\\neq 1$ and $g(1)=1$. Contradiction, since $g$ is not continuous and thus cannot be equal to $f$.</p>\n" }, { "answer_id": 2079770, "score": 1, "text": "<p>Assume that $\\,f_n(x)=x^{k_n}\\to f(x)\\,$ uniformly in $[0,1]$, where $\\,f\\in C[0,1]$. Then, for every $a\\in (0,1)$, the $\\{\\,f_n\\}$ must be uniformly convergent in $[0,a]$. 
But, $x^n\\to 0$, uniformly convergent in $[0,a]$, since\n$$\n\\max_{x\\in[0,a]}\\|\\,x^n\\|=a^n\\to 0,\\quad\\text{as $n\\to\\infty$.}\n$$\nHence $\\{\\,f_n\\}$ is also uniformly converging in $[0,a]$, and $\\,f|_{[0,a]}=0$.\nThus $\\,f(x)=0$, for all $x&lt;1$, and as $\\,f$ is continuous, then $f\\equiv 0$. </p>\n\n<p>But, $\\,f_n(1)=1^{k_n}=1$, and thus, $f(1)=\\lim_{n\\to\\infty}f_n(1)=1$. Contradiction. </p>\n\n<p>Thus $\\{x^n\\}$ does NOT have any accumulation point, and hence it is closed.</p>\n" } ]
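A numerical illustration of the separation fact underlying these answers: distinct powers stay a fixed sup-norm distance apart. For the pair $(n, 2n)$, substituting $u=t^n$ turns $x^n - x^{2n}$ into $u-u^2$, whose maximum on $[0,1]$ is $1/4$, independent of $n$ — so no subsequence of $\{x^n\}$ can be uniformly Cauchy. The following grid-based sketch is an approximation, not a proof:

```python
# Grid approximation of the sup-norm distance between x^n and x^m on [0, 1].
def sup_dist(n, m, grid=10_000):
    return max(abs((k / grid) ** n - (k / grid) ** m) for k in range(grid + 1))

# For m = 2n the exact sup-norm distance is 1/4 for every n, so the family
# {x^n} contains no Cauchy (hence no uniformly convergent) subsequence.
distances = [sup_dist(n, 2 * n) for n in (1, 5, 50)]
print(distances)  # each value is approximately 0.25
```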
Question: Is the set {$x^n$} a closed and bounded subset of C $[0,1]$ that is NOT compact? Answer 1: <p>Suppose it is not closed. Its complementary $U$ is not open. There exists $f\in U$ such that every ball $B(f,1/n)$ contains a function $x^m$. We deduce that $f=lim_nx^{u(n)}$. Since $f$ is not of the form $x^m$. $lim_nu(n)=+\infty$. But In this case, $lim_nx^{u(n)}$ is the function $g:[0,1]\rightarrow R$ such that $g(x)=0, x\neq 0$ and $g(1)=1$. Contradiction since $g$ is not continue, and thus cannot be equal to $f$.</p> Answer 2: <p>Assume that $\,f_n(x)=x^{k_n}\to f(x),\,$ is uniformly convergent in $[0,1]$, where $\,f\in C[0,1]$. Then, for every $a\in (0,1)$, the $\{\,f_n\}$ shall be uniformly converging in $[0,a]$. But, $x^n\to 0$, uniformly convergent in $[0,a]$, since $$ \max_{x\in[0,a]}\|\,x^n\|=a^n\to 0,\quad\text{as $n\to\infty$.} $$ Hence $\{\,f_n\}$ is also uniformly converging in $[0,a]$, and $\,f|_{[0,a]}=0$. Thus $\,f(x)=0$, for all $x&lt;1$, and as $\,f$ is continuous, then $f\equiv =0$. </p> <p>But, $\,f_n(1)=1^{k_n}=1$, and thus, $f(1)=\lim_{n\to\infty}f_n(1)=1$. Contradiction. </p> <p>Thus $\{x^n\}$ does NOT have any accumulation point, and hence it is closed.</p> Answer 3: <p>Suppose it is not closed. Its complementary $U$ is not open. There exists $f\in U$ such that every ball $B(f,1/n)$ contains a function $x^m$. We deduce that $f=lim_nx^{u(n)}$. Since $f$ is not of the form $x^m$. $lim_nu(n)=+\infty$. But In this case, $lim_nx^{u(n)}$ is the function $g:[0,1]\rightarrow R$ such that $g(x)=0, x\neq 0$ and $g(1)=1$. Contradiction since $g$ is not continue, and thus cannot be equal to $f$.</p> Answer 4: <p>Assume that $\,f_n(x)=x^{k_n}\to f(x),\,$ is uniformly convergent in $[0,1]$, where $\,f\in C[0,1]$. Then, for every $a\in (0,1)$, the $\{\,f_n\}$ shall be uniformly converging in $[0,a]$. 
But, $x^n\to 0$, uniformly convergent in $[0,a]$, since $$ \max_{x\in[0,a]}\|\,x^n\|=a^n\to 0,\quad\text{as $n\to\infty$.} $$ Hence $\{\,f_n\}$ is also uniformly converging in $[0,a]$, and $\,f|_{[0,a]}=0$. Thus $\,f(x)=0$, for all $x&lt;1$, and as $\,f$ is continuous, then $f\equiv =0$. </p> <p>But, $\,f_n(1)=1^{k_n}=1$, and thus, $f(1)=\lim_{n\to\infty}f_n(1)=1$. Contradiction. </p> <p>Thus $\{x^n\}$ does NOT have any accumulation point, and hence it is closed.</p> Answer 5: <p>Suppose it is not closed. Its complementary $U$ is not open. There exists $f\in U$ such that every ball $B(f,1/n)$ contains a function $x^m$. We deduce that $f=lim_nx^{u(n)}$. Since $f$ is not of the form $x^m$. $lim_nu(n)=+\infty$. But In this case, $lim_nx^{u(n)}$ is the function $g:[0,1]\rightarrow R$ such that $g(x)=0, x\neq 0$ and $g(1)=1$. Contradiction since $g$ is not continue, and thus cannot be equal to $f$.</p> Answer 6: <p>Assume that $\,f_n(x)=x^{k_n}\to f(x),\,$ is uniformly convergent in $[0,1]$, where $\,f\in C[0,1]$. Then, for every $a\in (0,1)$, the $\{\,f_n\}$ shall be uniformly converging in $[0,a]$. But, $x^n\to 0$, uniformly convergent in $[0,a]$, since $$ \max_{x\in[0,a]}\|\,x^n\|=a^n\to 0,\quad\text{as $n\to\infty$.} $$ Hence $\{\,f_n\}$ is also uniformly converging in $[0,a]$, and $\,f|_{[0,a]}=0$. Thus $\,f(x)=0$, for all $x&lt;1$, and as $\,f$ is continuous, then $f\equiv =0$. </p> <p>But, $\,f_n(1)=1^{k_n}=1$, and thus, $f(1)=\lim_{n\to\infty}f_n(1)=1$. Contradiction. </p> <p>Thus $\{x^n\}$ does NOT have any accumulation point, and hence it is closed.</p> Answer 7: <p>Suppose it is not closed. Its complementary $U$ is not open. There exists $f\in U$ such that every ball $B(f,1/n)$ contains a function $x^m$. We deduce that $f=lim_nx^{u(n)}$. Since $f$ is not of the form $x^m$. $lim_nu(n)=+\infty$. But In this case, $lim_nx^{u(n)}$ is the function $g:[0,1]\rightarrow R$ such that $g(x)=0, x\neq 0$ and $g(1)=1$. 
Contradiction since $g$ is not continue, and thus cannot be equal to $f$.</p> Answer 8: <p>Assume that $\,f_n(x)=x^{k_n}\to f(x),\,$ is uniformly convergent in $[0,1]$, where $\,f\in C[0,1]$. Then, for every $a\in (0,1)$, the $\{\,f_n\}$ shall be uniformly converging in $[0,a]$. But, $x^n\to 0$, uniformly convergent in $[0,a]$, since $$ \max_{x\in[0,a]}\|\,x^n\|=a^n\to 0,\quad\text{as $n\to\infty$.} $$ Hence $\{\,f_n\}$ is also uniformly converging in $[0,a]$, and $\,f|_{[0,a]}=0$. Thus $\,f(x)=0$, for all $x&lt;1$, and as $\,f$ is continuous, then $f\equiv =0$. </p> <p>But, $\,f_n(1)=1^{k_n}=1$, and thus, $f(1)=\lim_{n\to\infty}f_n(1)=1$. Contradiction. </p> <p>Thus $\{x^n\}$ does NOT have any accumulation point, and hence it is closed.</p> Answer 9: <p>Suppose it is not closed. Its complementary $U$ is not open. There exists $f\in U$ such that every ball $B(f,1/n)$ contains a function $x^m$. We deduce that $f=lim_nx^{u(n)}$. Since $f$ is not of the form $x^m$. $lim_nu(n)=+\infty$. But In this case, $lim_nx^{u(n)}$ is the function $g:[0,1]\rightarrow R$ such that $g(x)=0, x\neq 0$ and $g(1)=1$. Contradiction since $g$ is not continue, and thus cannot be equal to $f$.</p> Answer 10: <p>Assume that $\,f_n(x)=x^{k_n}\to f(x),\,$ is uniformly convergent in $[0,1]$, where $\,f\in C[0,1]$. Then, for every $a\in (0,1)$, the $\{\,f_n\}$ shall be uniformly converging in $[0,a]$. But, $x^n\to 0$, uniformly convergent in $[0,a]$, since $$ \max_{x\in[0,a]}\|\,x^n\|=a^n\to 0,\quad\text{as $n\to\infty$.} $$ Hence $\{\,f_n\}$ is also uniformly converging in $[0,a]$, and $\,f|_{[0,a]}=0$. Thus $\,f(x)=0$, for all $x&lt;1$, and as $\,f$ is continuous, then $f\equiv =0$. </p> <p>But, $\,f_n(1)=1^{k_n}=1$, and thus, $f(1)=\lim_{n\to\infty}f_n(1)=1$. Contradiction. </p> <p>Thus $\{x^n\}$ does NOT have any accumulation point, and hence it is closed.</p> Answer 11: <p>Suppose it is not closed. Its complementary $U$ is not open. 
There exists $f\in U$ such that every ball $B(f,1/n)$ contains a function $x^m$. We deduce that $f=\lim_n x^{u(n)}$. Since $f$ is not of the form $x^m$, $\lim_n u(n)=+\infty$. But in this case, $\lim_n x^{u(n)}$ is the function $g:[0,1]\rightarrow R$ such that $g(x)=0$ for $x\neq 1$ and $g(1)=1$. Contradiction, since $g$ is not continuous and thus cannot be equal to $f$.</p>
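The key step in the argument above — that $\max_{x\in[0,a]}\|x^n\| = a^n \to 0$ for any $a<1$, so $x^n\to 0$ uniformly on $[0,a]$ — can be checked numerically. The sketch below is illustrative only (the grid size and exponents are arbitrary choices, not from the original answers).

```python
import math

# Check that sup_{x in [0, a]} x^n is attained at x = a (so it equals a^n),
# and that a^n -> 0 for a < 1, supporting uniform convergence on [0, a].
a = 0.9
powers = (1, 5, 25, 125)
sups = [max((i / 1000 * a) ** n for i in range(1001)) for n in powers]
assert all(math.isclose(s, a ** n) for s, n in zip(sups, powers))
assert sups[-1] < 1e-5  # a^125 is already tiny: the sup norm on [0, a] vanishes
```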
Which books on proof-writing are suitable for someone engaged in self-study?
2,079,911
5
https://math.stackexchange.com/questions/2079911/which-books-on-proof-writing-are-stuitable-for-someone-engaged-in-self-study
<p>I want to learn how to read and write proofs. I only have basic pre-calculus skills. As I am preparing for my upcoming <em>calculus 1</em> course, I wanted to understand questions like</p> <blockquote> <p>Prove $\left|ab\right|=\left|a\right|\left|b\right|$ for any numbers $a$ and $b$.</p> </blockquote> <p>These questions pop up in various <em>calculus</em> books and I need to understand them. </p> <p>Any book recommendations?</p>
[ { "answer_id": 2080036, "score": 2, "text": "<p>If you're just about to move into Calculus I, you don't need to worry much about the formal logic behind the mathematics. Proofs typically pop up first in a linear algebra or abstract algebra course sophomore or junior year at a university. Real analysis is where you need to worry about proving both integral and differential Calculus, but that's a class that you take after you finish the Calculus sequence anyway. Any facts in a basic single variable Calculus class should be easy to prove with information from your textbook, if you are even required to prove anything at all.</p>\n" }, { "answer_id": 2079914, "score": 0, "text": "<p>Precalculus Mathematics in a Nutshell: Geometry, Algebra, Trigonometry, George F. Simmons</p>\n\n<p>Pre-calculus Demystified 2/E, Rhonda Huettenmueller</p>\n" } ]
Question: Which books on proof-writing are suitable for someone engaged in self-study? Answer 1: <p>If you're just about to move into Calculus I, you don't need to worry much about the formal logic behind the mathematics. Proofs typically pop up first in a linear algebra or abstract algebra course sophomore or junior year at a university. Real analysis is where you need to worry about proving both integral and differential Calculus, but that's a class that you take after you finish the Calculus sequence anyway. Any facts in a basic single variable Calculus class should be easy to prove with information from your textbook, if you are even required to prove anything at all.</p> Answer 2: <p>Precalculus Mathematics in a Nutshell: Geometry, Algebra, Trigonometry, George F. Simmons</p> <p>Pre-calculus Demystified 2/E, Rhonda Huettenmueller</p>
absolute value of poisson random variable&#39;s deviation from a constant
2,080,017
5
https://math.stackexchange.com/questions/2080017/absolute-value-of-poisson-random-variables-deviation-from-a-constant
<p>Let $q\in \{0,1,2,\dotsc\}$ be a fixed constant and $N$ be a Poisson random variable with parameter $\lambda$. Let $X=|N-q|$. Determine $\mathbb{E}[|N-q|]$, if possible. </p> <p>So, we start out, $\mathbb{E}[X]=\sum_{n=1}^\infty P[X\geq n]=\sum_{n=1}^\infty (1-P[X&lt; n]).$ Now, $P[X&lt;n]=P[|N-q|&lt;n]=P[q-n&lt;N&lt;q+n]=F_N(q+n-1)-F_N(q-n+1)$, where $F_N$ is the cumulative distribution function of $N$. But here I am having trouble simplifying this last expression. The cdf can be written in terms of the (incomplete) gamma function but I'm having trouble using that too. Any suggestions would be appreciated. </p>
[ { "answer_id": 2080033, "score": 3, "text": "<p>$$|N - q| = \\begin{cases}\nN-q, &amp; N \\geq q \\\\\nq - N, &amp; N &lt; q\\text{.}\n\\end{cases}$$\nHence,\n$$\\begin{align}\n\\mathbb{E}[|N-q|] &amp;= \\sum_{k=0}^{q-1}(q-k)\\cdot \\dfrac{e^{-\\lambda}\\lambda^{k}}{k!} + \\sum_{k=q}^{\\infty}(k-q)\\cdot\\dfrac{e^{-\\lambda}\\lambda^{k}}{k!} \\\\\n&amp;= \\sum_{k=0}^{\\infty}(k-q)\\cdot\\dfrac{e^{-\\lambda}\\lambda^{k}}{k!}+\\sum_{k=0}^{q-1}(q-k)\\cdot\\dfrac{e^{-\\lambda}\\lambda^{k}}{k!}-\\sum_{k=0}^{q-1}(k-q)\\cdot\\dfrac{e^{-\\lambda}\\lambda^{k}}{k!} \\\\\n&amp;= \\mathbb{E}[N-q]-2\\sum_{k=0}^{q-1}(k-q)\\cdot\\dfrac{e^{-\\lambda}\\lambda^{k}}{k!} \\\\\n&amp;= \\lambda-q-2\\sum_{k=0}^{q-1}(k-q)\\cdot\\dfrac{e^{-\\lambda}\\lambda^{k}}{k!}\n\\end{align}$$\nI am not sure if this can be simplified further.</p>\n\n<p>Edited: <a href=\"http://www.wolframalpha.com/input/?i=sum((k-q)*exp(-lambda)*lambda%5E(k)%2Fk!,%20k%3D%200...q-1)\" rel=\"nofollow noreferrer\">WolframAlpha</a> says that </p>\n\n<p>$$\\sum_{k=0}^{q-1}(k-q)\\cdot\\dfrac{e^{-\\lambda}\\lambda^{k}}{k!} = \\dfrac{\\lambda\\left[\\Gamma(q+1, \\lambda)-e^{-\\lambda}\\lambda^{q}\\right]}{\\Gamma(q+1)} - \\dfrac{\\Gamma(q+1, \\lambda)}{\\Gamma(q)}$$\nwhere $\\Gamma(\\cdot, \\cdot)$ is the <a href=\"http://mathworld.wolfram.com/IncompleteGammaFunction.html\" rel=\"nofollow noreferrer\">incomplete Gamma function</a>.</p>\n" }, { "answer_id": 2080029, "score": 1, "text": "<p>Hint:</p>\n\n<p>$|N-q|=(N-q)1_{N\\geq q}+ (q-N)1_{N\\leq q}$</p>\n" } ]
Question: absolute value of poisson random variable&#39;s deviation from a constant Answer 1: <p>$$|N - q| = \begin{cases} N-q, &amp; N \geq q \\ q - N, &amp; N &lt; q\text{.} \end{cases}$$ Hence, $$\begin{align} \mathbb{E}[|N-q|] &amp;= \sum_{k=0}^{q-1}(q-k)\cdot \dfrac{e^{-\lambda}\lambda^{k}}{k!} + \sum_{k=q}^{\infty}(k-q)\cdot\dfrac{e^{-\lambda}\lambda^{k}}{k!} \\ &amp;= \sum_{k=0}^{\infty}(k-q)\cdot\dfrac{e^{-\lambda}\lambda^{k}}{k!}+\sum_{k=0}^{q-1}(q-k)\cdot\dfrac{e^{-\lambda}\lambda^{k}}{k!}-\sum_{k=0}^{q-1}(k-q)\cdot\dfrac{e^{-\lambda}\lambda^{k}}{k!} \\ &amp;= \mathbb{E}[N-q]-2\sum_{k=0}^{q-1}(k-q)\cdot\dfrac{e^{-\lambda}\lambda^{k}}{k!} \\ &amp;= \lambda-q-2\sum_{k=0}^{q-1}(k-q)\cdot\dfrac{e^{-\lambda}\lambda^{k}}{k!} \end{align}$$ I am not sure if this can be simplified further.</p> <p>Edited: <a href="http://www.wolframalpha.com/input/?i=sum((k-q)*exp(-lambda)*lambda%5E(k)%2Fk!,%20k%3D%200...q-1)" rel="nofollow noreferrer">WolframAlpha</a> says that </p> <p>$$\sum_{k=0}^{q-1}(k-q)\cdot\dfrac{e^{-\lambda}\lambda^{k}}{k!} = \dfrac{\lambda\left[\Gamma(q+1, \lambda)-e^{-\lambda}\lambda^{q}\right]}{\Gamma(q+1)} - \dfrac{\Gamma(q+1, \lambda)}{\Gamma(q)}$$ where $\Gamma(\cdot, \cdot)$ is the <a href="http://mathworld.wolfram.com/IncompleteGammaFunction.html" rel="nofollow noreferrer">incomplete Gamma function</a>.</p> Answer 2: <p>Hint:</p> <p>$|N-q|=(N-q)1_{N\geq q}+ (q-N)1_{N\leq q}$</p>
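The closed form obtained in Answer 1 above can be checked numerically. The sketch below (the values of $\lambda$ and $q$, and the truncation length, are arbitrary test choices, not from the post) compares a direct truncation of the series for $\mathbb{E}|N-q|$ with the derived expression $\lambda-q-2\sum_{k=0}^{q-1}(k-q)e^{-\lambda}\lambda^k/k!$:

```python
import math

# Sanity check of the closed form for N ~ Poisson(lam):
#   E|N - q| = lam - q - 2 * sum_{k=0}^{q-1} (k - q) * e^{-lam} * lam^k / k!
# The test pairs (lam, q) below are arbitrary choices.

def poisson_pmf(lam, k_max):
    """P(N = k) for k = 0 .. k_max-1, built iteratively to avoid overflow."""
    p, out = math.exp(-lam), []
    for k in range(k_max):
        out.append(p)
        p *= lam / (k + 1)
    return out

def expected_abs_dev_series(lam, q, terms=200):
    """Direct evaluation of E|N - q| by truncating the Poisson series."""
    return sum(abs(k - q) * p for k, p in enumerate(poisson_pmf(lam, terms)))

def expected_abs_dev_closed(lam, q):
    """Closed form from the derivation above."""
    tail = sum((k - q) * p for k, p in enumerate(poisson_pmf(lam, q)))
    return lam - q - 2 * tail

for lam, q in [(3.5, 2), (10.0, 7), (1.0, 4)]:
    assert abs(expected_abs_dev_series(lam, q) - expected_abs_dev_closed(lam, q)) < 1e-9
```

Both evaluations agree to within floating-point error, which corroborates the algebra (though of course it is no substitute for the derivation itself).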
Is the sequence $\{\{\log(n!)\}\}_n$ dense in $[0,1]$?
2,080,667
5
https://math.stackexchange.com/questions/2080667/is-the-sequence-logn-n-dense-in-0-1
<p>I tried to find this problem in Mathematics Stack Exchange and in Math Overflow, but I didn't find it anywhere. Here is my problem:</p> <p>Is the sequence $\{\{\log(n!)\}\}_n$, the sequence of fractional parts of $\log(n!)$, dense in $[0,1]$?</p> <p><em>Note 1</em>: Here log stands for the logarithm with base $10$.</p> <p>By intuition, I think that the answer is affirmative. I tried to prove that the above sequence is dense in $[0,1]$ by using the density of the sequence $\{\{\log(n)\}\}_n$, but I got stuck in the process.</p> <p><strong>Motivation Problem:</strong> Let $a_0,...,a_n$ be natural numbers with $a_n \neq 0,9$. Does there exist a natural number $m$ such that the first digits in the decimal representation of $m!$ are $a_0,...,a_n$?</p> <p>If $\{\{\log(n!)\}\}_n$ is dense in $[0,1]$ and $l:=a_0+...+a_n10^{n}$, then there is a sufficiently large natural $m$:</p> <p>$\{\log(l)\}&lt;\{\log(m!)\}&lt;\{\log(l+1)\} \Rightarrow \\\{\log(l)\}+[\log(m!)]&lt;\log(m!)&lt;\{\log(l+1)\}+[\log(m!)]\Rightarrow \\\log(l)+[\log(m!)]-[\log(l)]&lt;\log(m!)&lt;\log(l+1)+[\log(m!)]-[\log(l+1)]\ (1)$ </p> <p>Since $a_n \neq 0,9$, we have: $10^n\leq l,l+1&lt;10^{n+1}\Rightarrow n\leq \log(l),\log(l+1)&lt;n+1 \Rightarrow [\log(l)]=[\log(l+1)]=n$</p> <p>Hence $[\log(m!)]-[\log(l)]$ and $[\log(m!)]-[\log(l+1)]$ are the same positive integer, say $k$. Then $(1)$ yields that:</p> <p>$10^kl&lt;m!&lt;10^k(l+1)$ and therefore the answer to the question is "yes".</p> <p><em>Note 2</em>: $[x]$ is the integer part of $x$.</p> <p>Do you have any ideas or hints that will help me prove or disprove the claim of density? </p>
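The fractional-part test $(1)$ from the question body can be run directly. In the sketch below the prefix "7" and the search bound `m_max` are arbitrary choices, and prefixes whose successor crosses a power of 10 (the excluded digits $0,9$) are not handled, exactly as in the question:

```python
import math

# Numerical illustration of the motivation problem: find the first m whose
# factorial starts with a given digit string l, using the condition
#   {log10(l)} < {log10(m!)} < {log10(l+1)}
# from the question. The prefix "7" and m_max are arbitrary choices; prefixes
# whose successor crosses a power of 10 (digits 0 and 9) are not handled.

def first_factorial_with_prefix(prefix, m_max=2000):
    l = int(prefix)
    lo = math.log10(l) % 1.0        # {log10(l)}
    hi = math.log10(l + 1) % 1.0    # {log10(l+1)}
    log_fact = 0.0                  # log10(m!) accumulated as sum of log10(n)
    for m in range(2, m_max + 1):
        log_fact += math.log10(m)
        if lo < log_fact % 1.0 < hi:
            return m
    return None

m = first_factorial_with_prefix("7")
assert m == 6 and str(math.factorial(m)).startswith("7")   # 6! = 720
```

For the prefix "7" the first hit is $m=6$, since $6!=720$; longer prefixes need a larger `m_max`, and density of $\{\{\log(n!)\}\}_n$ is exactly what guarantees a hit always exists.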
[ { "answer_id": 2080870, "score": 9, "text": "<p>Note that $\\log(s!)=\\sum_{n=1}^s \\log n$. To prove that the $\\{\\sum \\log n\\}$s are dense in $\\mathbb R/\\mathbb Z$, consider an arbitrary $x\\in[0,1]$ and $k\\in\\mathbb N_+$, and show that there is some $\\{\\sum \\log n\\}$ whose distance from $x$ is at most $1/k$.</p>\n\n<p>This will be the case if we can find $2k$ successive $n$s such that $m+\\frac1{2k} &lt; \\log n &lt; m+\\frac 1k$ for all of them, for some integer $m$.</p>\n\n<p>However, this is easy enough. First, since the derivative of $\\log x$ goes to $0$, there will be an $N$ such that each successive increase in $\\log n$ from that point onwards is at most $\\frac{1}{(2k)^2}$. Additionally $\\log n$ itself goes to $+\\infty$, so there will be an $M&gt;N$ such that\n$$ \\log(M-1) &lt; m+\\frac1{2k} &lt; \\log M $$\nfor some $m\\in\\mathbb N$.</p>\n\n<p>The numbers $M, M+1, M+2, \\ldots, M+2k-1$ then have the required property.</p>\n" }, { "answer_id": 2094639, "score": 3, "text": "<p>I was searching for an other problem and I encountered <a href=\"https://mathoverflow.net/questions/45665/distribution-mod-1-of-factorial-multiples-of-real-numbers\">this</a> post in Math Overflow. As the O.P. states, in this <a href=\"https://projecteuclid.org/download/pdf_1/euclid.aop/1176995891\" rel=\"nofollow noreferrer\">paper</a> Persi Diaconis proved that $\\{\\log (n!)\\}_n$ is equidistributed (or uniformly distributed) modulo 1. This implies that the sequence $\\{\\{\\log (n!)\\}\\}_n$ is dense in $[0,1]$.</p>\n\n<p>First, Diaconis gives the definition of a strong Benford sequence. Then he proves that for every strong Benford sequence $\\{a_n\\}_n$ the sequence $\\{\\log (a_n)\\}_n$ is equidistributed modulo 1 (Theorem 1). Finally he uses this theorem and some theorems from the book <em>Uniform Distribution Sequences</em> of <em>Kuipers</em> and <em>Niederreiter</em>, to prove that $\\{n!\\}_n$ is a strong Benford sequence (Theorem 3). This finishes the job. </p>\n" } ]
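The equidistribution result cited in answer 2094639 above can be probed empirically. The sketch below (sample size `N`, bin count `B`, and the 25% tolerance are arbitrary choices; this illustrates the theorem, it proves nothing) tallies the fractional parts $\{\log_{10}(n!)\}$ into equal bins:

```python
import math

# Empirical probe of equidistribution of {log10(n!)} mod 1: bucket the
# fractional parts for n = 1..N into B equal bins. N, B, and the tolerance
# are arbitrary choices; this only illustrates the Diaconis theorem.

N, B = 20000, 10
counts = [0] * B
log_fact = 0.0
for n in range(1, N + 1):
    log_fact += math.log10(n)              # log10(n!) accumulated incrementally
    counts[int((log_fact % 1.0) * B)] += 1

# Equidistribution predicts roughly N/B hits per bin.
assert sum(counts) == N
assert all(abs(c - N / B) < 0.25 * N / B for c in counts)
```

Accumulating $\log_{10} n$ incrementally keeps every quantity a small float, so no big-integer factorials are ever formed; the rounding error after $N$ additions is far smaller than a bin width.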
Question: Is the sequence $\{\{\log(n!)\}\}_n$ dense in $[0,1]$? Answer 1: <p>Note that $\log(s!)=\sum_{n=1}^s \log n$. To prove that the $\{\sum \log n\}$s are dense in $\mathbb R/\mathbb Z$, consider an arbitrary $x\in[0,1]$ and $k\in\mathbb N_+$, and show that there is some $\{\sum \log n\}$ whose distance from $x$ is at most $1/k$.</p> <p>This will be the case if we can find $2k$ successive $n$s such that $m+\frac1{2k} &lt; \log n &lt; m+\frac 1k$ for all of them, for some integer $m$.</p> <p>However, this is easy enough. First, since the derivative of $\log x$ goes to $0$, there will be an $N$ such that each successive increase in $\log n$ from that point onwards is at most $\frac{1}{(2k)^2}$. Additionally $\log n$ itself goes to $+\infty$, so there will be an $M&gt;N$ such that $$ \log(M-1) &lt; m+\frac1{2k} &lt; \log M $$ for some $m\in\mathbb N$.</p> <p>The numbers $M, M+1, M+2, \ldots, M+2k-1$ then have the required property.</p> Answer 2: <p>I was searching for an other problem and I encountered <a href="https://mathoverflow.net/questions/45665/distribution-mod-1-of-factorial-multiples-of-real-numbers">this</a> post in Math Overflow. As the O.P. states, in this <a href="https://projecteuclid.org/download/pdf_1/euclid.aop/1176995891" rel="nofollow noreferrer">paper</a> Persi Diaconis proved that $\{\log (n!)\}_n$ is equidistributed (or uniformly distributed) modulo 1. This implies that the sequence $\{\{\log (n!)\}\}_n$ is dense in $[0,1]$.</p> <p>First, Diaconis gives the definition of a strong Benford sequence. Then he proves that for every strong Benford sequence $\{a_n\}_n$ the sequence $\{\log (a_n)\}_n$ is equidistributed modulo 1 (Theorem 1). Finally he uses this theorem and some theorems from the book <em>Uniform Distribution Sequences</em> of <em>Kuipers</em> and <em>Niederreiter</em>, to prove that $\{n!\}_n$ is a strong Benford sequence (Theorem 3). This finishes the job. </p> Answer 9: <p>Note that $\log(s!)=\sum_{n=1}^s \log n$. 
To prove that the $\{\sum \log n\}$s are dense in $\mathbb R/\mathbb Z$, consider an arbitrary $x\in[0,1]$ and $k\in\mathbb N_+$, and show that there is some $\{\sum \log n\}$ whose distance from $x$ is at most $1/k$.</p> <p>This will be the case if we can find $2k$ successive $n$s such that $m+\frac1{2k} &lt; \log n &lt; m+\frac 1k$ for all of them, for some integer $m$.</p> <p>However, this is easy enough. First, since the derivative of $\log x$ goes to $0$, there will be an $N$ such that each successive increase in $\log n$ from that point onwards is at most $\frac{1}{(2k)^2}$. Additionally $\log n$ itself goes to $+\infty$, so there will be an $M&gt;N$ such that $$ \log(M-1) &lt; m+\frac1{2k} &lt; \log M $$ for some $m\in\mathbb N$.</p> <p>The numbers $M, M+1, M+2, \ldots, M+2k-1$ then have the required property.</p> Answer 10: <p>I was searching for an other problem and I encountered <a href="https://mathoverflow.net/questions/45665/distribution-mod-1-of-factorial-multiples-of-real-numbers">this</a> post in Math Overflow. As the O.P. states, in this <a href="https://projecteuclid.org/download/pdf_1/euclid.aop/1176995891" rel="nofollow noreferrer">paper</a> Persi Diaconis proved that $\{\log (n!)\}_n$ is equidistributed (or uniformly distributed) modulo 1. This implies that the sequence $\{\{\log (n!)\}\}_n$ is dense in $[0,1]$.</p> <p>First, Diaconis gives the definition of a strong Benford sequence. Then he proves that for every strong Benford sequence $\{a_n\}_n$ the sequence $\{\log (a_n)\}_n$ is equidistributed modulo 1 (Theorem 1). Finally he uses this theorem and some theorems from the book <em>Uniform Distribution Sequences</em> of <em>Kuipers</em> and <em>Niederreiter</em>, to prove that $\{n!\}_n$ is a strong Benford sequence (Theorem 3). This finishes the job. </p> Answer 11: <p>Note that $\log(s!)=\sum_{n=1}^s \log n$. 
To prove that the $\{\sum \log n\}$s are dense in $\mathbb R/\mathbb Z$, consider an arbitrary $x\in[0,1]$ and $k\in\mathbb N_+$, and show that there is some $\{\sum \log n\}$ whose distance from $x$ is at most $1/k$.</p> <p>This will be the case if we can find $2k$ successive $n$s such that $m+\frac1{2k} &lt; \log n &lt; m+\frac 1k$ for all of them, for some integer $m$.</p> <p>However, this is easy enough. First, since the derivative of $\log x$ goes to $0$, there will be an $N$ such that each successive increase in $\log n$ from that point onwards is at most $\frac{1}{(2k)^2}$. Additionally $\log n$ itself goes to $+\infty$, so there will be an $M&gt;N$ such that $$ \log(M-1) &lt; m+\frac1{2k} &lt; \log M $$ for some $m\in\mathbb N$.</p> <p>The numbers $M, M+1, M+2, \ldots, M+2k-1$ then have the required property.</p> Answer 12: <p>I was searching for an other problem and I encountered <a href="https://mathoverflow.net/questions/45665/distribution-mod-1-of-factorial-multiples-of-real-numbers">this</a> post in Math Overflow. As the O.P. states, in this <a href="https://projecteuclid.org/download/pdf_1/euclid.aop/1176995891" rel="nofollow noreferrer">paper</a> Persi Diaconis proved that $\{\log (n!)\}_n$ is equidistributed (or uniformly distributed) modulo 1. This implies that the sequence $\{\{\log (n!)\}\}_n$ is dense in $[0,1]$.</p> <p>First, Diaconis gives the definition of a strong Benford sequence. Then he proves that for every strong Benford sequence $\{a_n\}_n$ the sequence $\{\log (a_n)\}_n$ is equidistributed modulo 1 (Theorem 1). Finally he uses this theorem and some theorems from the book <em>Uniform Distribution Sequences</em> of <em>Kuipers</em> and <em>Niederreiter</em>, to prove that $\{n!\}_n$ is a strong Benford sequence (Theorem 3). This finishes the job. </p> Answer 13: <p>Note that $\log(s!)=\sum_{n=1}^s \log n$. 
To prove that the $\{\sum \log n\}$s are dense in $\mathbb R/\mathbb Z$, consider an arbitrary $x\in[0,1]$ and $k\in\mathbb N_+$, and show that there is some $\{\sum \log n\}$ whose distance from $x$ is at most $1/k$.</p> <p>This will be the case if we can find $2k$ successive $n$s such that $m+\frac1{2k} &lt; \log n &lt; m+\frac 1k$ for all of them, for some integer $m$.</p> <p>However, this is easy enough. First, since the derivative of $\log x$ goes to $0$, there will be an $N$ such that each successive increase in $\log n$ from that point onwards is at most $\frac{1}{(2k)^2}$. Additionally $\log n$ itself goes to $+\infty$, so there will be an $M&gt;N$ such that $$ \log(M-1) &lt; m+\frac1{2k} &lt; \log M $$ for some $m\in\mathbb N$.</p> <p>The numbers $M, M+1, M+2, \ldots, M+2k-1$ then have the required property.</p> Answer 14: <p>I was searching for an other problem and I encountered <a href="https://mathoverflow.net/questions/45665/distribution-mod-1-of-factorial-multiples-of-real-numbers">this</a> post in Math Overflow. As the O.P. states, in this <a href="https://projecteuclid.org/download/pdf_1/euclid.aop/1176995891" rel="nofollow noreferrer">paper</a> Persi Diaconis proved that $\{\log (n!)\}_n$ is equidistributed (or uniformly distributed) modulo 1. This implies that the sequence $\{\{\log (n!)\}\}_n$ is dense in $[0,1]$.</p> <p>First, Diaconis gives the definition of a strong Benford sequence. Then he proves that for every strong Benford sequence $\{a_n\}_n$ the sequence $\{\log (a_n)\}_n$ is equidistributed modulo 1 (Theorem 1). Finally he uses this theorem and some theorems from the book <em>Uniform Distribution Sequences</em> of <em>Kuipers</em> and <em>Niederreiter</em>, to prove that $\{n!\}_n$ is a strong Benford sequence (Theorem 3). This finishes the job. </p> Answer 15: <p>Note that $\log(s!)=\sum_{n=1}^s \log n$. 
To prove that the $\{\sum \log n\}$s are dense in $\mathbb R/\mathbb Z$, consider an arbitrary $x\in[0,1]$ and $k\in\mathbb N_+$, and show that there is some $\{\sum \log n\}$ whose distance from $x$ is at most $1/k$.</p> <p>This will be the case if we can find $2k$ successive $n$s such that $m+\frac1{2k} &lt; \log n &lt; m+\frac 1k$ for all of them, for some integer $m$.</p> <p>However, this is easy enough. First, since the derivative of $\log x$ goes to $0$, there will be an $N$ such that each successive increase in $\log n$ from that point onwards is at most $\frac{1}{(2k)^2}$. Additionally $\log n$ itself goes to $+\infty$, so there will be an $M&gt;N$ such that $$ \log(M-1) &lt; m+\frac1{2k} &lt; \log M $$ for some $m\in\mathbb N$.</p> <p>The numbers $M, M+1, M+2, \ldots, M+2k-1$ then have the required property.</p> Answer 16: <p>I was searching for an other problem and I encountered <a href="https://mathoverflow.net/questions/45665/distribution-mod-1-of-factorial-multiples-of-real-numbers">this</a> post in Math Overflow. As the O.P. states, in this <a href="https://projecteuclid.org/download/pdf_1/euclid.aop/1176995891" rel="nofollow noreferrer">paper</a> Persi Diaconis proved that $\{\log (n!)\}_n$ is equidistributed (or uniformly distributed) modulo 1. This implies that the sequence $\{\{\log (n!)\}\}_n$ is dense in $[0,1]$.</p> <p>First, Diaconis gives the definition of a strong Benford sequence. Then he proves that for every strong Benford sequence $\{a_n\}_n$ the sequence $\{\log (a_n)\}_n$ is equidistributed modulo 1 (Theorem 1). Finally he uses this theorem and some theorems from the book <em>Uniform Distribution Sequences</em> of <em>Kuipers</em> and <em>Niederreiter</em>, to prove that $\{n!\}_n$ is a strong Benford sequence (Theorem 3). This finishes the job. </p> Answer 17: <p>Note that $\log(s!)=\sum_{n=1}^s \log n$. 
To prove that the $\{\sum \log n\}$s are dense in $\mathbb R/\mathbb Z$, consider an arbitrary $x\in[0,1]$ and $k\in\mathbb N_+$, and show that there is some $\{\sum \log n\}$ whose distance from $x$ is at most $1/k$.</p> <p>This will be the case if we can find $2k$ successive $n$s such that $m+\frac1{2k} &lt; \log n &lt; m+\frac 1k$ for all of them, for some integer $m$.</p> <p>However, this is easy enough. First, since the derivative of $\log x$ goes to $0$, there will be an $N$ such that each successive increase in $\log n$ from that point onwards is at most $\frac{1}{(2k)^2}$. Additionally $\log n$ itself goes to $+\infty$, so there will be an $M&gt;N$ such that $$ \log(M-1) &lt; m+\frac1{2k} &lt; \log M $$ for some $m\in\mathbb N$.</p> <p>The numbers $M, M+1, M+2, \ldots, M+2k-1$ then have the required property.</p> Answer 18: <p>I was searching for an other problem and I encountered <a href="https://mathoverflow.net/questions/45665/distribution-mod-1-of-factorial-multiples-of-real-numbers">this</a> post in Math Overflow. As the O.P. states, in this <a href="https://projecteuclid.org/download/pdf_1/euclid.aop/1176995891" rel="nofollow noreferrer">paper</a> Persi Diaconis proved that $\{\log (n!)\}_n$ is equidistributed (or uniformly distributed) modulo 1. This implies that the sequence $\{\{\log (n!)\}\}_n$ is dense in $[0,1]$.</p> <p>First, Diaconis gives the definition of a strong Benford sequence. Then he proves that for every strong Benford sequence $\{a_n\}_n$ the sequence $\{\log (a_n)\}_n$ is equidistributed modulo 1 (Theorem 1). Finally he uses this theorem and some theorems from the book <em>Uniform Distribution Sequences</em> of <em>Kuipers</em> and <em>Niederreiter</em>, to prove that $\{n!\}_n$ is a strong Benford sequence (Theorem 3). This finishes the job. </p>
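Both answers above claim that $\{\log(n!)\}$ is dense (indeed equidistributed) modulo 1. As a quick numerical illustration of that claim (a sketch added here, not part of either original post), the following Python snippet bins the fractional parts of $\log(n!)$ for $n$ up to $10^5$, using `math.lgamma(n+1) = \log(n!)$` to avoid overflow, and checks that every subinterval of width $1/50$ is visited:

```python
# Numerical sketch: the fractional parts {log(n!)} appear to fill [0, 1],
# consistent with density / equidistribution mod 1.
import math

N = 100_000
BINS = 50
counts = [0] * BINS
for n in range(1, N + 1):
    frac = math.lgamma(n + 1) % 1.0          # fractional part of log(n!)
    counts[min(int(frac * BINS), BINS - 1)] += 1

# Every bin of width 1/50 is hit, i.e. the sequence visits each subinterval.
print(min(counts) > 0)
```

This is only evidence, not a proof; with 100,000 terms and 50 bins, equidistribution predicts roughly 2,000 hits per bin.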
Are any constants (from analysis) rational?
2,081,138
5
https://math.stackexchange.com/questions/2081138/are-any-constants-from-analysis-rational
<p>There seems to me to be a meta-principle of analysis that all the "interesting" mathematical constants - $\pi$, $e$, $\gamma$, Bessel function zeros, etc. - are irrational (even transcendental). I'm wondering if there are any counterexamples to this principle - are there any "interesting" constants arising in analysis that are rational?</p> <p>What constitutes an interesting mathematical constant is of course murky. I will propose the following guidelines to make the question somewhat more well-defined:</p> <ol> <li>Small integers are excluded. (These come up all the time for algebraic and combinatorial reasons.) Common fractions like 3/2 should probably be excluded on the same grounds.</li> <li>To be eligible, a constant should be "natural" in the sense that its definition does not depend on arbitrary choices for some other parameters. </li> <li>I'm restricting this question to constants of analysis because some other branches of math, like group theory and combinatorics, have an abundance of large integer constants (e.g. order of the Monster Group). To qualify as a constant of analysis, it should not be derivable from purely algebraic or combinatorial principles. </li> </ol>
[ { "answer_id": 2081246, "score": 4, "text": "<p>You might look <a href=\"https://mathoverflow.net/questions/32967/have-any-long-suspected-irrational-numbers-turned-out-to-be-rational\">here</a> to see which of the listed examples relate to calculus.</p>\n\n<p>Edit: In the link above, $\\frac{32}{27}$ related to the closure of the set of all real zeros of all chromatic polynomials of graphs seems a good candidate to me.</p>\n" }, { "answer_id": 2081277, "score": -2, "text": "<p>There exist interesting natural (hence rational) constants in analysis that fulfill the conditions stated above.</p>\n\n<h2>Proof</h2>\n\n<p>Let $S$ be the natural $\\lceil s\\rceil$. Since $s$ is small, $S$ is also small. We can all agree that the number of interesting theorems in analysis is not small, therefore there exists a $K&gt;S$ with (analysis theorem $K$ is interesting) and $K$ not small. We all know that Gödel numbers are interesting. If mathematical theorems are ordered by their Gödel numbers and enumerated, we know that (Gödel number of theorem $n$) $\\geq$ $n$. It follows that (Gödel number of analysis theorem $k$) $\\geq$ $k$. Hence we know that (Gödel number of analysis theorem $K$) is interesting and not small, furthermore strongly analysis-related and rational. Q.E.D.</p>\n" } ]
Question: Are any constants (from analysis) rational? Answer 1: <p>You might look <a href="https://mathoverflow.net/questions/32967/have-any-long-suspected-irrational-numbers-turned-out-to-be-rational">here</a> to see which of the listed examples relate to calculus.</p> <p>Edit: In the link above, $\frac{32}{27}$ related to the closure of the set of all real zeros of all chromatic polynomials of graphs seems a good candidate to me.</p> Answer 2: <p>There exist interesting natural (hence rational) constants in analysis that fulfill the conditions stated above.</p> <h2>Proof</h2> <p>Let $S$ be the natural $\lceil s\rceil$. Since $s$ is small, $S$ is also small. We can all agree that the number of interesting theorems in analysis is not small, therefore there exists a $K&gt;S$ with (analysis theorem $K$ is interesting) and $K$ not small. We all know that Gödel numbers are interesting. If mathematical theorems are ordered by their Gödel numbers and enumerated, we know that (Gödel number of theorem $n$) $\geq$ $n$. It follows that (Gödel number of analysis theorem $k$) $\geq$ $k$. Hence we know that (Gödel number of analysis theorem $K$) is interesting and not small, furthermore strongly analysis-related and rational. Q.E.D.</p>
Neighborhood vs open neighborhood
2,081,161
5
https://math.stackexchange.com/questions/2081161/neighborhood-vs-open-neighborhood
<p>I am a beginner in general topology. I have some confusion about the use of neighborhoods versus open neighborhoods. </p> <p>$\mathbf{Definition}$: Let $(X,\mathcal{T})$ be a topological space and $x\in X$. $N\subset X$ is called a neighborhood of $x$ if there exists $U\in \mathcal{T}$ such that $x\in U\subset N$. It is clear that any open set $U\in \mathcal{T}$, such that $x\in U$, is a neighborhood of $x$.</p> <p>Now, let $(x_n)_{n\in\mathbb{N}}\subset X$ be a sequence and $x\in X$. </p> <p>$\mathbf{Definition\ 1}$: $x$ is a limit of the sequence $(x_n)$ if for any neighborhood $N$ of $x$, there exists $n_0\in\mathbb{N}$ such that $x_n\in N$ for all $n\geq n_0$.</p> <p>Some people use a different definition:</p> <p>$\mathbf{Definition\ 2}$: $x$ is a limit of the sequence $(x_n)$ if for any open neighborhood $N$ of $x$, there exists $n_0\in\mathbb{N}$ such that $x_n\in N$ for all $n\geq n_0$.</p> <p>The only difference is that, in Definition 1, they consider all neighborhoods of $x$, whereas in Definition 2 they consider only open neighborhoods. But is that difference not a problem?</p> <p>Are those definitions equivalent? The same question arises for the definition of an adherent point of a subset $A\subset X$.</p> <p>$\mathbf{Definition\ 1}$: $x$ is an adherent point of $A$ if for any neighborhood $N$ of $x$, $N\cap A\neq \emptyset$.</p> <p>$\mathbf{Definition\ 2}$: $x$ is an adherent point of $A$ if for any open neighborhood $N$ of $x$, $N\cap A\neq \emptyset$.</p> <p>I am really confused about those two definitions. </p>
[ { "answer_id": 2081177, "score": 4, "text": "<p>In both cases the definitions are equivalent. It should be clear in each case that if <strong>Definition</strong> $\\mathbf{1}$ is satisfied, then so is <strong>Definition</strong> $\\mathbf{2}$. </p>\n\n<p>Suppose in the first case that for each open nbhd $U$ of $x$ there is an $m_U\\in\\Bbb N$ such that $x_n\\in U$ whenever $n\\ge m_U$, and let $N$ be any nbhd of $x$. By definition there is an open nbhd $U$ of $x$ such that $U\\subseteq N$, and clearly $x_n\\in N$ whenever $n\\ge m_U$.</p>\n\n<p>Finally, suppose in the second case that each open nbhd of $x$ has non-empty intersection with $A$, and let $N$ be any nbhd of $x$. As before, there is an open nbhd $U$ of $x$ such that $U\\subseteq N$, and clearly $N\\cap A\\supseteq U\\cap A\\ne\\varnothing$, so $N\\cap A\\ne\\varnothing$.</p>\n" }, { "answer_id": 2081169, "score": 2, "text": "<p>It should be clear that if $x$ is the limit of $(x_n)$ by Definition $1$, then Definition $2$ is satisfied as well (if it's not clear let me know and I can explain). Hence we only need to show that if Definition $2$ is satisfied, then Definition $1$ is as well:</p>\n\n<blockquote>\n <p>Suppose $x$ is the limit of $(x_n)$ by Definition $2$. Let $N$ be any neighborhood of $x$, so there exists some open $U$ with $x\\in U\\subset N$. Because $U$ is an open neighborhood of $x$, by assumption there exists $n_0\\in\\Bbb{N}$ such that $x_n\\in U$ for $n\\ge n_0$. But $U\\subset N$, so $x_n\\in N$ for $n\\ge n_0$ as well, and Definition $1$ is indeed satisfied.</p>\n</blockquote>\n\n<p>So we see the two definitions of a limit are indeed equivalent. I'd recommend you try to apply a similar argument to your second pair of definitions.</p>\n\n<hr>\n\n<p>Edit: Here is the argument to show Definition $1$ implies Definition $2$:</p>\n\n<blockquote>\n <p>Suppose Definition $1$ holds. 
Then if $U$ is any open neighborhood of $x$, because $U$ is a neighborhood Definition $1$ tells us that there is an $n_0$ such that $x_n\\in U$ for $n\\ge n_0$; since $U$ was arbitrary, we see Definition $2$ holds.</p>\n</blockquote>\n" }, { "answer_id": 2081172, "score": 2, "text": "<p>The two definitions are equivalent. It's clear to imply the conclusion of open neighborhoods from that of neighborhoods. Now conversely, if the statement (in particular, your definitions) holds for open neighborhoods of $x,$ then for any neighborhoods, one can always find an open set containing $x,$ which is an open neighborhood of $x.$ Hence the conclusion follows.</p>\n" }, { "answer_id": 2081204, "score": 0, "text": "<p>These two Definitions are equivalent. To see this, consider a sequence $(x_n)_{n \\in \\mathbb{N}}$ with limit $x$. I start by showing\n$$Definition 1 \\implies Definition 2$$</p>\n\n<p>By definition 1 we know that for any neighborhood $N$ of $x$, $\\exists \\;n_0 \\in \\mathbb{N} \\;$ s.t. $\\;x_n \\in N \\; \\forall n≥n_0$.<br>\nNow pick one such neighborhood $N$. By the definition of neighborhood, $\\exists U$ in the topology s.t. $x \\in U \\subset N$. In particular, since $U$ is in the topology, $U$ is an open set. But $U$ is also a neighborhood itself and chosen randomly. Hence Definition 2 follows.<br>\nRemark: a faster way to see this is to realize that the set of all open neighborhoods of $x$ is a subset of the set of all neighborhoods of $x$.<br>\nNow to see $$Definition 2 \\implies Definition 1$$<br>\nTake any open neighborhood $U$ of $x$. Then by definition 2 $\\exists \\; n_0 \\in \\mathbb{N} \\;s.t. \\; x_n \\in U \\; \\forall n≥n_0$. Now if you consider any arbitrary neighborhood $N$ of $x$ containing $U$, i.e. $U \\subset N$, you clearly also have that $x_n \\in N \\forall n≥n_0$, since $(x_j \\in U \\implies x_j \\in N )\\forall j \\in \\mathbb{N}$. 
Thus also definition 1 holds and we finally conclude that these definitions are equivalent.<br>\nThe equivalence of the definitions about an adherent point can be shown quite analogously.</p>\n" }, { "answer_id": 2081204, "score": 0, "text": "<p>These two Definitions are equivalent. To see this, consider a sequence $(x_n)_{n \\in \\mathbb{N}}$ with limit $x$. I start by showing\n$$Definition 1 \\implies Definition 2$$</p>\n\n<p>By definition 1 we know that for any neighborhood $N$ of $x$, $\\exists \\;n_0 \\in \\mathbb{N} \\;$ s.t. $\\;x_n \\in N \\; \\forall n≥n_0$.<br>\nNow pick one such neighborhood $N$. By the definition of neighborhood, $\\exists U$ in the topology s.t. $x \\in U \\subset N$. In particular, since $U$ is in the topology, $U$ is an open set. But $U$ is also a neighborhood itself and chosen randomly. Hence Definition 2 follows.<br>\nRemark: a faster way to see this is to realize that the set of all open neighborhoods of $x$ is a subset of the set of all neighborhoods of $x$.<br>\nNow to see $$Definition 2 \\implies Definition 1$$<br>\nTake any open neighborhood $U$ of $x$. Then by definition 2 $\\exists \\; n_0 \\in \\mathbb{N} \\;s.t. \\; x_n \\in U \\; \\forall n≥n_0$. Now if you consider any arbitrary neighborhood $N$ of $x$ containing $U$, i.e. 
$U \\subset N$, you clearly also have that $x_n \\in N \\forall n≥n_0$, since $(x_j \\in U \\implies x_j \\in N )\\forall j \\in \\mathbb{N}$. Thus also definition 1 holds and we finally conclude that these definitions are equivalent.<br>\nThe equivalence of the definitions about an adherent point can be shown quite analogously.</p>\n" }, { "answer_id": 2081177, "score": 4, "text": "<p>In both cases the definitions are equivalent. It should be clear in each case that if <strong>Definition</strong> $\\mathbf{1}$ is satisfied, then so is <strong>Definition</strong> $\\mathbf{2}$. </p>\n\n<p>Suppose in the first case that for each open nbhd $U$ of $x$ there is an $m_U\\in\\Bbb N$ such that $x_n\\in U$ whenever $n\\ge m_U$, and let $N$ be any nbhd of $x$. By definition there is an open nbhd $U$ of $x$ such that $U\\subseteq N$, and clearly $x_n\\in N$ whenever $n\\ge m_U$.</p>\n\n<p>Finally, suppose in the second case that each open nbhd of $x$ has non-empty intersection with $A$, and let $N$ be any nbhd of $x$. As before, there is an open nbhd $U$ of $x$ such that $U\\subseteq N$, and clearly $N\\cap A\\supseteq U\\cap A\\ne\\varnothing$, so $N\\cap A\\ne\\varnothing$.</p>\n" }, { "answer_id": 2081169, "score": 2, "text": "<p>It should be clear that if $x$ is the limit of $(x_n)$ by Definition $1$, then Definition $2$ is satisfied as well (if it's not clear let me know and I can explain). Hence we only need to show that if Definition $2$ is satisfied, then Definition $1$ is as well:</p>\n\n<blockquote>\n <p>Suppose $x$ is the limit of $(x_n)$ by Definition $2$. Let $N$ be any neighborhood of $x$, so there exists some open $U$ with $x\\in U\\subset N$. Because $U$ is an open neighborhood of $x$, by assumption there exists $n_0\\in\\Bbb{N}$ such that $x_n\\in U$ for $n\\ge n_0$. But $U\\subset N$, so $x_n\\in N$ for $n\\ge n_0$ as well, and Definition $1$ is indeed satisfied.</p>\n</blockquote>\n\n<p>So we see the two definitions of a limit are indeed equivalent. 
I'd recommend you try to apply a similar argument to your second pair of definitions.</p>\n\n<hr>\n\n<p>Edit: Here is the argument to show Definition $1$ implies Definition $2$:</p>\n\n<blockquote>\n <p>Suppose Definition $1$ holds. Then if $U$ is any open neighborhood of $x$, because $U$ is a neighborhood Definition $1$ tells us that there is an $n_0$ such that $x_n\\in U$ for $n\\ge n_0$; since $U$ was arbitrary, we see Definition $2$ holds.</p>\n</blockquote>\n" }, { "answer_id": 2081172, "score": 2, "text": "<p>The two definitions are equivalent. It's clear to imply the conclusion of open neighborhoods from that of neighborhoods. Now conversely, if the statement (in particular, your definitions) holds for open neighborhoods of $x,$ then for any neighborhoods, one can always find an open set containing $x,$ which is an open neighborhood of $x.$ Hence the conclusion follows.</p>\n" }, { "answer_id": 2081204, "score": 0, "text": "<p>These two Definitions are equivalent. To see this, consider a sequence $(x_n)_{n \\in \\mathbb{N}}$ with limit $x$. I start by showing\n$$Definition 1 \\implies Definition 2$$</p>\n\n<p>By definition 1 we know that for any neighborhood $N$ of $x$, $\\exists \\;n_0 \\in \\mathbb{N} \\;$ s.t. $\\;x_n \\in N \\; \\forall n≥n_0$.<br>\nNow pick one such neighborhood $N$. By the definition of neighborhood, $\\exists U$ in the topology s.t. $x \\in U \\subset N$. In particular, since $U$ is in the topology, $U$ is an open set. But $U$ is also a neighborhood itself and chosen randomly. Hence Definition 2 follows.<br>\nRemark: a faster way to see this is to realize that the set of all open neighborhoods of $x$ is a subset of the set of all neighborhoods of $x$.<br>\nNow to see $$Definition 2 \\implies Definition 1$$<br>\nTake any open neighborhood $U$ of $x$. Then by definition 2 $\\exists \\; n_0 \\in \\mathbb{N} \\;s.t. \\; x_n \\in U \\; \\forall n≥n_0$. Now if you consider any arbitrary neighborhood $N$ of $x$ containing $U$, i.e. 
$U \\subset N$, you clearly also have that $x_n \\in N \\forall n≥n_0$, since $(x_j \\in U \\implies x_j \\in N )\\forall j \\in \\mathbb{N}$. Thus also definition 1 holds and we finally conclude that these definitions are equivalent.<br>\nThe equivalence of the definitions about an adherent point can be shown quite analogously.</p>\n" } ]
Question: Neighborhood vs open neighborhood Answer 1: <p>In both cases the definitions are equivalent. It should be clear in each case that if <strong>Definition</strong> $\mathbf{1}$ is satisfied, then so is <strong>Definition</strong> $\mathbf{2}$. </p> <p>Suppose in the first case that for each open nbhd $U$ of $x$ there is an $m_U\in\Bbb N$ such that $x_n\in U$ whenever $n\ge m_U$, and let $N$ be any nbhd of $x$. By definition there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $x_n\in N$ whenever $n\ge m_U$.</p> <p>Finally, suppose in the second case that each open nbhd of $x$ has non-empty intersection with $A$, and let $N$ be any nbhd of $x$. As before, there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $N\cap A\supseteq U\cap A\ne\varnothing$, so $N\cap A\ne\varnothing$.</p> Answer 2: <p>It should be clear that if $x$ is the limit of $(x_n)$ by Definition $1$, then Definition $2$ is satisfied as well (if it's not clear let me know and I can explain). Hence we only need to show that if Definition $2$ is satisfied, then Definition $1$ is as well:</p> <blockquote> <p>Suppose $x$ is the limit of $(x_n)$ by Definition $2$. Let $N$ be any neighborhood of $x$, so there exists some open $U$ with $x\in U\subset N$. Because $U$ is an open neighborhood of $x$, by assumption there exists $n_0\in\Bbb{N}$ such that $x_n\in U$ for $n\ge n_0$. But $U\subset N$, so $x_n\in N$ for $n\ge n_0$ as well, and Definition $1$ is indeed satisfied.</p> </blockquote> <p>So we see the two definitions of a limit are indeed equivalent. I'd recommend you try to apply a similar argument to your second pair of definitions.</p> <hr> <p>Edit: Here is the argument to show Definition $1$ implies Definition $2$:</p> <blockquote> <p>Suppose Definition $1$ holds. 
Then if $U$ is any open neighborhood of $x$, because $U$ is a neighborhood Definition $1$ tells us that there is an $n_0$ such that $x_n\in U$ for $n\ge n_0$; since $U$ was arbitrary, we see Definition $2$ holds.</p> </blockquote> Answer 3: <p>The two definitions are equivalent. It's clear to imply the conclusion of open neighborhoods from that of neighborhoods. Now conversely, if the statement (in particular, your definitions) holds for open neighborhoods of $x,$ then for any neighborhoods, one can always find an open set containing $x,$ which is an open neighborhood of $x.$ Hence the conclusion follows.</p> Answer 4: <p>These two Definitions are equivalent. To see this, consider a sequence $(x_n)_{n \in \mathbb{N}}$ with limit $x$. I start by showing $$Definition 1 \implies Definition 2$$</p> <p>By definition 1 we know that for any neighborhood $N$ of $x$, $\exists \;n_0 \in \mathbb{N} \;$ s.t. $\;x_n \in N \; \forall n≥n_0$.<br> Now pick one such neighborhood $N$. By the definition of neighborhood, $\exists U$ in the topology s.t. $x \in U \subset N$. In particular, since $U$ is in the topology, $U$ is an open set. But $U$ is also a neighborhood itself and chosen randomly. Hence Definition 2 follows.<br> Remark: a faster way to see this is to realize that the set of all open neighborhoods of $x$ is a subset of the set of all neighborhoods of $x$.<br> Now to see $$Definition 2 \implies Definition 1$$<br> Take any open neighborhood $U$ of $x$. Then by definition 2 $\exists \; n_0 \in \mathbb{N} \;s.t. \; x_n \in U \; \forall n≥n_0$. Now if you consider any arbitrary neighborhood $N$ of $x$ containing $U$, i.e. $U \subset N$, you clearly also have that $x_n \in N \forall n≥n_0$, since $(x_j \in U \implies x_j \in N )\forall j \in \mathbb{N}$. 
Thus also definition 1 holds and we finally conclude that these definitions are equivalent.<br> The equivalence of the definitions about an adherent point can be shown quite analogously.</p> Answer 5: <p>In both cases the definitions are equivalent. It should be clear in each case that if <strong>Definition</strong> $\mathbf{1}$ is satisfied, then so is <strong>Definition</strong> $\mathbf{2}$. </p> <p>Suppose in the first case that for each open nbhd $U$ of $x$ there is an $m_U\in\Bbb N$ such that $x_n\in U$ whenever $n\ge m_U$, and let $N$ be any nbhd of $x$. By definition there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $x_n\in N$ whenever $n\ge m_U$.</p> <p>Finally, suppose in the second case that each open nbhd of $x$ has non-empty intersection with $A$, and let $N$ be any nbhd of $x$. As before, there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $N\cap A\supseteq U\cap A\ne\varnothing$, so $N\cap A\ne\varnothing$.</p> Answer 6: <p>It should be clear that if $x$ is the limit of $(x_n)$ by Definition $1$, then Definition $2$ is satisfied as well (if it's not clear let me know and I can explain). Hence we only need to show that if Definition $2$ is satisfied, then Definition $1$ is as well:</p> <blockquote> <p>Suppose $x$ is the limit of $(x_n)$ by Definition $2$. Let $N$ be any neighborhood of $x$, so there exists some open $U$ with $x\in U\subset N$. Because $U$ is an open neighborhood of $x$, by assumption there exists $n_0\in\Bbb{N}$ such that $x_n\in U$ for $n\ge n_0$. But $U\subset N$, so $x_n\in N$ for $n\ge n_0$ as well, and Definition $1$ is indeed satisfied.</p> </blockquote> <p>So we see the two definitions of a limit are indeed equivalent. I'd recommend you try to apply a similar argument to your second pair of definitions.</p> <hr> <p>Edit: Here is the argument to show Definition $1$ implies Definition $2$:</p> <blockquote> <p>Suppose Definition $1$ holds. 
Then if $U$ is any open neighborhood of $x$, because $U$ is a neighborhood Definition $1$ tells us that there is an $n_0$ such that $x_n\in U$ for $n\ge n_0$; since $U$ was arbitrary, we see Definition $2$ holds.</p> </blockquote> Answer 7: <p>The two definitions are equivalent. It's clear to imply the conclusion of open neighborhoods from that of neighborhoods. Now conversely, if the statement (in particular, your definitions) holds for open neighborhoods of $x,$ then for any neighborhoods, one can always find an open set containing $x,$ which is an open neighborhood of $x.$ Hence the conclusion follows.</p> Answer 8: <p>These two Definitions are equivalent. To see this, consider a sequence $(x_n)_{n \in \mathbb{N}}$ with limit $x$. I start by showing $$Definition 1 \implies Definition 2$$</p> <p>By definition 1 we know that for any neighborhood $N$ of $x$, $\exists \;n_0 \in \mathbb{N} \;$ s.t. $\;x_n \in N \; \forall n≥n_0$.<br> Now pick one such neighborhood $N$. By the definition of neighborhood, $\exists U$ in the topology s.t. $x \in U \subset N$. In particular, since $U$ is in the topology, $U$ is an open set. But $U$ is also a neighborhood itself and chosen randomly. Hence Definition 2 follows.<br> Remark: a faster way to see this is to realize that the set of all open neighborhoods of $x$ is a subset of the set of all neighborhoods of $x$.<br> Now to see $$Definition 2 \implies Definition 1$$<br> Take any open neighborhood $U$ of $x$. Then by definition 2 $\exists \; n_0 \in \mathbb{N} \;s.t. \; x_n \in U \; \forall n≥n_0$. Now if you consider any arbitrary neighborhood $N$ of $x$ containing $U$, i.e. $U \subset N$, you clearly also have that $x_n \in N \forall n≥n_0$, since $(x_j \in U \implies x_j \in N )\forall j \in \mathbb{N}$. 
Thus also definition 1 holds and we finally conclude that these definitions are equivalent.<br> The equivalence of the definitions about an adherent point can be shown quite analogously.</p> Answer 9: <p>In both cases the definitions are equivalent. It should be clear in each case that if <strong>Definition</strong> $\mathbf{1}$ is satisfied, then so is <strong>Definition</strong> $\mathbf{2}$. </p> <p>Suppose in the first case that for each open nbhd $U$ of $x$ there is an $m_U\in\Bbb N$ such that $x_n\in U$ whenever $n\ge m_U$, and let $N$ be any nbhd of $x$. By definition there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $x_n\in N$ whenever $n\ge m_U$.</p> <p>Finally, suppose in the second case that each open nbhd of $x$ has non-empty intersection with $A$, and let $N$ be any nbhd of $x$. As before, there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $N\cap A\supseteq U\cap A\ne\varnothing$, so $N\cap A\ne\varnothing$.</p> Answer 10: <p>It should be clear that if $x$ is the limit of $(x_n)$ by Definition $1$, then Definition $2$ is satisfied as well (if it's not clear let me know and I can explain). Hence we only need to show that if Definition $2$ is satisfied, then Definition $1$ is as well:</p> <blockquote> <p>Suppose $x$ is the limit of $(x_n)$ by Definition $2$. Let $N$ be any neighborhood of $x$, so there exists some open $U$ with $x\in U\subset N$. Because $U$ is an open neighborhood of $x$, by assumption there exists $n_0\in\Bbb{N}$ such that $x_n\in U$ for $n\ge n_0$. But $U\subset N$, so $x_n\in N$ for $n\ge n_0$ as well, and Definition $1$ is indeed satisfied.</p> </blockquote> <p>So we see the two definitions of a limit are indeed equivalent. I'd recommend you try to apply a similar argument to your second pair of definitions.</p> <hr> <p>Edit: Here is the argument to show Definition $1$ implies Definition $2$:</p> <blockquote> <p>Suppose Definition $1$ holds. 
Then if $U$ is any open neighborhood of $x$, because $U$ is a neighborhood Definition $1$ tells us that there is an $n_0$ such that $x_n\in U$ for $n\ge n_0$; since $U$ was arbitrary, we see Definition $2$ holds.</p> </blockquote> Answer 11: <p>The two definitions are equivalent. It's clear to imply the conclusion of open neighborhoods from that of neighborhoods. Now conversely, if the statement (in particular, your definitions) holds for open neighborhoods of $x,$ then for any neighborhoods, one can always find an open set containing $x,$ which is an open neighborhood of $x.$ Hence the conclusion follows.</p> Answer 12: <p>These two Definitions are equivalent. To see this, consider a sequence $(x_n)_{n \in \mathbb{N}}$ with limit $x$. I start by showing $$Definition 1 \implies Definition 2$$</p> <p>By definition 1 we know that for any neighborhood $N$ of $x$, $\exists \;n_0 \in \mathbb{N} \;$ s.t. $\;x_n \in N \; \forall n≥n_0$.<br> Now pick one such neighborhood $N$. By the definition of neighborhood, $\exists U$ in the topology s.t. $x \in U \subset N$. In particular, since $U$ is in the topology, $U$ is an open set. But $U$ is also a neighborhood itself and chosen randomly. Hence Definition 2 follows.<br> Remark: a faster way to see this is to realize that the set of all open neighborhoods of $x$ is a subset of the set of all neighborhoods of $x$.<br> Now to see $$Definition 2 \implies Definition 1$$<br> Take any open neighborhood $U$ of $x$. Then by definition 2 $\exists \; n_0 \in \mathbb{N} \;s.t. \; x_n \in U \; \forall n≥n_0$. Now if you consider any arbitrary neighborhood $N$ of $x$ containing $U$, i.e. $U \subset N$, you clearly also have that $x_n \in N \forall n≥n_0$, since $(x_j \in U \implies x_j \in N )\forall j \in \mathbb{N}$. 
Thus also definition 1 holds and we finally conclude that these definitions are equivalent.<br> The equivalence of the definitions about an adherent point can be shown quite analogously.</p> Answer 13: <p>In both cases the definitions are equivalent. It should be clear in each case that if <strong>Definition</strong> $\mathbf{1}$ is satisfied, then so is <strong>Definition</strong> $\mathbf{2}$. </p> <p>Suppose in the first case that for each open nbhd $U$ of $x$ there is an $m_U\in\Bbb N$ such that $x_n\in U$ whenever $n\ge m_U$, and let $N$ be any nbhd of $x$. By definition there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $x_n\in N$ whenever $n\ge m_U$.</p> <p>Finally, suppose in the second case that each open nbhd of $x$ has non-empty intersection with $A$, and let $N$ be any nbhd of $x$. As before, there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $N\cap A\supseteq U\cap A\ne\varnothing$, so $N\cap A\ne\varnothing$.</p> Answer 14: <p>It should be clear that if $x$ is the limit of $(x_n)$ by Definition $1$, then Definition $2$ is satisfied as well (if it's not clear let me know and I can explain). Hence we only need to show that if Definition $2$ is satisfied, then Definition $1$ is as well:</p> <blockquote> <p>Suppose $x$ is the limit of $(x_n)$ by Definition $2$. Let $N$ be any neighborhood of $x$, so there exists some open $U$ with $x\in U\subset N$. Because $U$ is an open neighborhood of $x$, by assumption there exists $n_0\in\Bbb{N}$ such that $x_n\in U$ for $n\ge n_0$. But $U\subset N$, so $x_n\in N$ for $n\ge n_0$ as well, and Definition $1$ is indeed satisfied.</p> </blockquote> <p>So we see the two definitions of a limit are indeed equivalent. I'd recommend you try to apply a similar argument to your second pair of definitions.</p> <hr> <p>Edit: Here is the argument to show Definition $1$ implies Definition $2$:</p> <blockquote> <p>Suppose Definition $1$ holds. 
Then if $U$ is any open neighborhood of $x$, because $U$ is a neighborhood Definition $1$ tells us that there is an $n_0$ such that $x_n\in U$ for $n\ge n_0$; since $U$ was arbitrary, we see Definition $2$ holds.</p> </blockquote> Answer 15: <p>The two definitions are equivalent. It's clear to imply the conclusion of open neighborhoods from that of neighborhoods. Now conversely, if the statement (in particular, your definitions) holds for open neighborhoods of $x,$ then for any neighborhoods, one can always find an open set containing $x,$ which is an open neighborhood of $x.$ Hence the conclusion follows.</p> Answer 16: <p>These two Definitions are equivalent. To see this, consider a sequence $(x_n)_{n \in \mathbb{N}}$ with limit $x$. I start by showing $$Definition 1 \implies Definition 2$$</p> <p>By definition 1 we know that for any neighborhood $N$ of $x$, $\exists \;n_0 \in \mathbb{N} \;$ s.t. $\;x_n \in N \; \forall n≥n_0$.<br> Now pick one such neighborhood $N$. By the definition of neighborhood, $\exists U$ in the topology s.t. $x \in U \subset N$. In particular, since $U$ is in the topology, $U$ is an open set. But $U$ is also a neighborhood itself and chosen randomly. Hence Definition 2 follows.<br> Remark: a faster way to see this is to realize that the set of all open neighborhoods of $x$ is a subset of the set of all neighborhoods of $x$.<br> Now to see $$Definition 2 \implies Definition 1$$<br> Take any open neighborhood $U$ of $x$. Then by definition 2 $\exists \; n_0 \in \mathbb{N} \;s.t. \; x_n \in U \; \forall n≥n_0$. Now if you consider any arbitrary neighborhood $N$ of $x$ containing $U$, i.e. $U \subset N$, you clearly also have that $x_n \in N \forall n≥n_0$, since $(x_j \in U \implies x_j \in N )\forall j \in \mathbb{N}$. 
Thus also definition 1 holds and we finally conclude that these definitions are equivalent.<br> The equivalence of the definitions about an adherent point can be shown quite analogously.</p> Answer 17: <p>In both cases the definitions are equivalent. It should be clear in each case that if <strong>Definition</strong> $\mathbf{1}$ is satisfied, then so is <strong>Definition</strong> $\mathbf{2}$. </p> <p>Suppose in the first case that for each open nbhd $U$ of $x$ there is an $m_U\in\Bbb N$ such that $x_n\in U$ whenever $n\ge m_U$, and let $N$ be any nbhd of $x$. By definition there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $x_n\in N$ whenever $n\ge m_U$.</p> <p>Finally, suppose in the second case that each open nbhd of $x$ has non-empty intersection with $A$, and let $N$ be any nbhd of $x$. As before, there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $N\cap A\supseteq U\cap A\ne\varnothing$, so $N\cap A\ne\varnothing$.</p> Answer 18: <p>It should be clear that if $x$ is the limit of $(x_n)$ by Definition $1$, then Definition $2$ is satisfied as well (if it's not clear let me know and I can explain). Hence we only need to show that if Definition $2$ is satisfied, then Definition $1$ is as well:</p> <blockquote> <p>Suppose $x$ is the limit of $(x_n)$ by Definition $2$. Let $N$ be any neighborhood of $x$, so there exists some open $U$ with $x\in U\subset N$. Because $U$ is an open neighborhood of $x$, by assumption there exists $n_0\in\Bbb{N}$ such that $x_n\in U$ for $n\ge n_0$. But $U\subset N$, so $x_n\in N$ for $n\ge n_0$ as well, and Definition $1$ is indeed satisfied.</p> </blockquote> <p>So we see the two definitions of a limit are indeed equivalent. I'd recommend you try to apply a similar argument to your second pair of definitions.</p> <hr> <p>Edit: Here is the argument to show Definition $1$ implies Definition $2$:</p> <blockquote> <p>Suppose Definition $1$ holds. 
Then if $U$ is any open neighborhood of $x$, because $U$ is a neighborhood Definition $1$ tells us that there is an $n_0$ such that $x_n\in U$ for $n\ge n_0$; since $U$ was arbitrary, we see Definition $2$ holds.</p> </blockquote> Answer 19: <p>The two definitions are equivalent. It's clear to imply the conclusion of open neighborhoods from that of neighborhoods. Now conversely, if the statement (in particular, your definitions) holds for open neighborhoods of $x,$ then for any neighborhoods, one can always find an open set containing $x,$ which is an open neighborhood of $x.$ Hence the conclusion follows.</p> Answer 20: <p>These two Definitions are equivalent. To see this, consider a sequence $(x_n)_{n \in \mathbb{N}}$ with limit $x$. I start by showing $$Definition 1 \implies Definition 2$$</p> <p>By definition 1 we know that for any neighborhood $N$ of $x$, $\exists \;n_0 \in \mathbb{N} \;$ s.t. $\;x_n \in N \; \forall n≥n_0$.<br> Now pick one such neighborhood $N$. By the definition of neighborhood, $\exists U$ in the topology s.t. $x \in U \subset N$. In particular, since $U$ is in the topology, $U$ is an open set. But $U$ is also a neighborhood itself and chosen randomly. Hence Definition 2 follows.<br> Remark: a faster way to see this is to realize that the set of all open neighborhoods of $x$ is a subset of the set of all neighborhoods of $x$.<br> Now to see $$Definition 2 \implies Definition 1$$<br> Take any open neighborhood $U$ of $x$. Then by definition 2 $\exists \; n_0 \in \mathbb{N} \;s.t. \; x_n \in U \; \forall n≥n_0$. Now if you consider any arbitrary neighborhood $N$ of $x$ containing $U$, i.e. $U \subset N$, you clearly also have that $x_n \in N \forall n≥n_0$, since $(x_j \in U \implies x_j \in N )\forall j \in \mathbb{N}$. 
Thus also definition 1 holds and we finally conclude that these definitions are equivalent.<br> The equivalence of the definitions about an adherent point can be shown quite analogously.</p> Answer 21: <p>In both cases the definitions are equivalent. It should be clear in each case that if <strong>Definition</strong> $\mathbf{1}$ is satisfied, then so is <strong>Definition</strong> $\mathbf{2}$. </p> <p>Suppose in the first case that for each open nbhd $U$ of $x$ there is an $m_U\in\Bbb N$ such that $x_n\in U$ whenever $n\ge m_U$, and let $N$ be any nbhd of $x$. By definition there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $x_n\in N$ whenever $n\ge m_U$.</p> <p>Finally, suppose in the second case that each open nbhd of $x$ has non-empty intersection with $A$, and let $N$ be any nbhd of $x$. As before, there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $N\cap A\supseteq U\cap A\ne\varnothing$, so $N\cap A\ne\varnothing$.</p> Answer 22: <p>It should be clear that if $x$ is the limit of $(x_n)$ by Definition $1$, then Definition $2$ is satisfied as well (if it's not clear let me know and I can explain). Hence we only need to show that if Definition $2$ is satisfied, then Definition $1$ is as well:</p> <blockquote> <p>Suppose $x$ is the limit of $(x_n)$ by Definition $2$. Let $N$ be any neighborhood of $x$, so there exists some open $U$ with $x\in U\subset N$. Because $U$ is an open neighborhood of $x$, by assumption there exists $n_0\in\Bbb{N}$ such that $x_n\in U$ for $n\ge n_0$. But $U\subset N$, so $x_n\in N$ for $n\ge n_0$ as well, and Definition $1$ is indeed satisfied.</p> </blockquote> <p>So we see the two definitions of a limit are indeed equivalent. I'd recommend you try to apply a similar argument to your second pair of definitions.</p> <hr> <p>Edit: Here is the argument to show Definition $1$ implies Definition $2$:</p> <blockquote> <p>Suppose Definition $1$ holds. 
Then if $U$ is any open neighborhood of $x$, because $U$ is a neighborhood Definition $1$ tells us that there is an $n_0$ such that $x_n\in U$ for $n\ge n_0$; since $U$ was arbitrary, we see Definition $2$ holds.</p> </blockquote> Answer 23: <p>The two definitions are equivalent. It's clear to imply the conclusion of open neighborhoods from that of neighborhoods. Now conversely, if the statement (in particular, your definitions) holds for open neighborhoods of $x,$ then for any neighborhoods, one can always find an open set containing $x,$ which is an open neighborhood of $x.$ Hence the conclusion follows.</p> Answer 24: <p>These two Definitions are equivalent. To see this, consider a sequence $(x_n)_{n \in \mathbb{N}}$ with limit $x$. I start by showing $$Definition 1 \implies Definition 2$$</p> <p>By definition 1 we know that for any neighborhood $N$ of $x$, $\exists \;n_0 \in \mathbb{N} \;$ s.t. $\;x_n \in N \; \forall n≥n_0$.<br> Now pick one such neighborhood $N$. By the definition of neighborhood, $\exists U$ in the topology s.t. $x \in U \subset N$. In particular, since $U$ is in the topology, $U$ is an open set. But $U$ is also a neighborhood itself and chosen randomly. Hence Definition 2 follows.<br> Remark: a faster way to see this is to realize that the set of all open neighborhoods of $x$ is a subset of the set of all neighborhoods of $x$.<br> Now to see $$Definition 2 \implies Definition 1$$<br> Take any open neighborhood $U$ of $x$. Then by definition 2 $\exists \; n_0 \in \mathbb{N} \;s.t. \; x_n \in U \; \forall n≥n_0$. Now if you consider any arbitrary neighborhood $N$ of $x$ containing $U$, i.e. $U \subset N$, you clearly also have that $x_n \in N \forall n≥n_0$, since $(x_j \in U \implies x_j \in N )\forall j \in \mathbb{N}$. 
Thus also definition 1 holds and we finally conclude that these definitions are equivalent.<br> The equivalence of the definitions about an adherent point can be shown quite analogously.</p> Answer 25: <p>In both cases the definitions are equivalent. It should be clear in each case that if <strong>Definition</strong> $\mathbf{1}$ is satisfied, then so is <strong>Definition</strong> $\mathbf{2}$. </p> <p>Suppose in the first case that for each open nbhd $U$ of $x$ there is an $m_U\in\Bbb N$ such that $x_n\in U$ whenever $n\ge m_U$, and let $N$ be any nbhd of $x$. By definition there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $x_n\in N$ whenever $n\ge m_U$.</p> <p>Finally, suppose in the second case that each open nbhd of $x$ has non-empty intersection with $A$, and let $N$ be any nbhd of $x$. As before, there is an open nbhd $U$ of $x$ such that $U\subseteq N$, and clearly $N\cap A\supseteq U\cap A\ne\varnothing$, so $N\cap A\ne\varnothing$.</p> Answer 26: <p>It should be clear that if $x$ is the limit of $(x_n)$ by Definition $1$, then Definition $2$ is satisfied as well (if it's not clear let me know and I can explain). Hence we only need to show that if Definition $2$ is satisfied, then Definition $1$ is as well:</p> <blockquote> <p>Suppose $x$ is the limit of $(x_n)$ by Definition $2$. Let $N$ be any neighborhood of $x$, so there exists some open $U$ with $x\in U\subset N$. Because $U$ is an open neighborhood of $x$, by assumption there exists $n_0\in\Bbb{N}$ such that $x_n\in U$ for $n\ge n_0$. But $U\subset N$, so $x_n\in N$ for $n\ge n_0$ as well, and Definition $1$ is indeed satisfied.</p> </blockquote> <p>So we see the two definitions of a limit are indeed equivalent. I'd recommend you try to apply a similar argument to your second pair of definitions.</p> <hr> <p>Edit: Here is the argument to show Definition $1$ implies Definition $2$:</p> <blockquote> <p>Suppose Definition $1$ holds. 
Then if $U$ is any open neighborhood of $x$, because $U$ is a neighborhood Definition $1$ tells us that there is an $n_0$ such that $x_n\in U$ for $n\ge n_0$; since $U$ was arbitrary, we see Definition $2$ holds.</p> </blockquote>
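The neighborhood/open-neighborhood equivalence argued in the answers above can also be checked exhaustively on a finite example. The following is an illustrative sketch only (a hand-picked topology on a three-point set; all helper names are ad hoc): for every point $x$ and every subset $A$, "every neighborhood of $x$ meets $A$" agrees with "every open neighborhood of $x$ meets $A$".

```python
# Exhaustive check, on a finite topological space, that "every neighborhood
# of x meets A" is equivalent to "every open neighborhood of x meets A".
# The topology below is hand-picked and the helper names are ad hoc.

from itertools import combinations

X = {0, 1, 2}
opens = [set(), {0}, {0, 1}, {0, 1, 2}]   # a topology on X

def subsets(s):
    s = sorted(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def open_neighborhoods(x):
    return [U for U in opens if x in U]

def neighborhoods(x):
    # N is a neighborhood of x iff some open U satisfies x in U <= N.
    return [N for N in subsets(X) if any(x in U and U <= N for U in opens)]

for x in X:
    for A in subsets(X):
        assert all(N & A for N in neighborhoods(x)) == \
               all(U & A for U in open_neighborhoods(x))
```

Swapping in any other finite topology on $X$ leaves the check valid, since by construction every neighborhood contains an open neighborhood.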
Commutator Subgroup of Direct Product equals the Direct Product of the Commutator Subgroups
2,081,328
5
https://math.stackexchange.com/questions/2081328/commutator-subgroup-of-direct-product-equals-the-direct-product-of-the-commutato
<p>I need to prove that $(G_{1}\times G_{2})^{\prime} = G_{1}^{\prime} \times G_{2}^{\prime}$, where $G^{\prime}$ denotes the <strong>commutator subgroup</strong> of $G$ - i.e., $G^{\prime}=[G,G]$, the subgroup generated by all commutators of elements of $G$. </p> <p>Recall that the <strong>commutator</strong> of two elements $x$ and $y$ in a group $G$, denoted $[x,y]$ is defined to equal $x^{-1}y^{-1}xy$.</p> <p><strong>I have posted my proof as an answer below. Could somebody please take a look at it and let me know if it's okay? If not, please let me know what I need to do in order to fix it.</strong> Thank you. :)</p> <hr> <p><strong>Edit:</strong> I have been informed that my answer given below actually shows only equality of the set of commutators, and that $G^{\prime}$ consists of products of commutators. So, this question is no longer a proof check, but I am asking specifically how to fix what I have written below in order to make it actually answer the thing I set out to prove. I am having a bit of trouble understanding some of the hints given me thus far, and am going to need more detail in any answers given in order for them to be helpful.</p>
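Before checking the proofs, the identity itself can be sanity-checked computationally on small concrete groups. The sketch below is illustrative only (hand-rolled permutation arithmetic with ad hoc helper names, not taken from any answer): it verifies $(S_3 \times S_3)^{\prime} = S_3^{\prime} \times S_3^{\prime}$ by brute-force closure.

```python
# Brute-force sanity check of (G1 x G2)' = G1' x G2' on a small example.
# Permutations are stored as tuples of images; all helpers are ad hoc.

def compose(p, q):
    # (p * q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def closure(gens):
    # Subgroup generated by `gens`: multiply until nothing new appears.
    n = len(next(iter(gens)))
    elems = {tuple(range(n))} | set(gens)
    frontier = set(elems)
    while frontier:
        new = {compose(a, b) for a in frontier for b in elems}
        new |= {compose(a, b) for a in elems for b in frontier}
        frontier = new - elems
        elems |= frontier
    return elems

def commutator_subgroup(G):
    # Subgroup generated by all commutators [x, y] = x^-1 y^-1 x y.
    comms = {compose(compose(inverse(x), inverse(y)), compose(x, y))
             for x in G for y in G}
    return closure(comms)

def direct_product(G, H):
    # Realize G x H as permutations of two disjoint blocks of points.
    n = len(next(iter(G)))
    return {g + tuple(n + i for i in h) for g in G for h in H}

# S3 on {0,1,2}, generated by a transposition and a 3-cycle.
S3 = closure({(1, 0, 2), (1, 2, 0)})

lhs = commutator_subgroup(direct_product(S3, S3))
rhs = direct_product(commutator_subgroup(S3), commutator_subgroup(S3))
assert lhs == rhs          # (S3 x S3)' = S3' x S3'
assert len(lhs) == 9       # namely A3 x A3
```

Since elements are plain tuples, set equality compares the two subgroups directly, and the same check runs unchanged for other small generating sets.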
[ { "answer_id": 2081407, "score": 4, "text": "<p>Okay...I think this answer finally takes care of the difficulty. Thanks to arctictern for all his help :)</p>\n\n<hr>\n\n<p>Let $g \\in (G_{1}\\times G_{2})^{\\prime}$. Then, since $(G_{1} \\times G_{2})^{\\prime}$ is the subgroup generated by all commutators of elements of $G_{1} \\times G_{2}$, $g$ is the product of commutators of elements of $G_{1} \\times G_{2}$. Letting $(a_{i},b_{i})$, $(c_{i},d_{i})\\in G_{1} \\times G_{2}$, we have that</p>\n\n<p>$\\begin{align} g = [(a_{1},b_{1}),(c_{1},d_{1})]\\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\\cdot\\,\\, \\cdots \\,\\, \\cdot[(a_{t},b_{t}),(c_{t},d_{t})]\\,\\text{for some}\\, t, \\\\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\\\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\\\ = (a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \\,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\\\ = ([a_{1},c_{1}]\\cdot [a_{2},c_{2}]\\cdot \\,\\, \\cdots \\, \\, \\cdot [a_{t},c_{t}], \\,[b_{1},d_{1}]\\cdot [b_{2},d_{2}]\\cdot \\,\\, \\cdots \\, \\, \\cdot [b_{t},d_{t}]) \\in G_{1}^{\\prime} \\times G_{2}^{\\prime}\\end{align}$</p>\n\n<p>So, we have the inclusion $\\mathbf{(G_{1}\\times G_{2})^{\\prime}\\subseteq G_{1}^{\\prime}\\times G_{2}^{\\prime}}$</p>\n\n<hr>\n\n<p>In the other direction, let $h \\in G_{1}^{\\prime} \\times G_{2}^{\\prime}$.</p>\n\n<p>To belong in $ G_{1}^{\\prime} \\times G_{2}^{\\prime}$, $h$ must be the direct product of an element of $G_{1}^{\\prime}$, 
which is the (group operation) product of commutators of elements of $G_{1}$, and an element of $G_{2}^{\\prime}$, which is the (group operation) product of commutators of elements of $G_{2}$.</p>\n\n<p>Let $h_{1}$ be such an element of $G_{1}^{\\prime}$ and let $h_{2}$ be such an element of $G_{2}^{\\prime}$, and let $h = h_{1} \\times h_{2}$. Then,</p>\n\n<p>$h_{1} = [a_{1},c_{1}]\\cdot [a_{2}, c_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [a_{t},c_{t}]$ for some $t$, where each $a_{i}$, $c_{i}$ is an element of $G_{1}$</p>\n\n<p>and $h_{2} =[b_{1},d_{1}]\\cdot [b_{2}, d_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [b_{t},d_{t}]$ for some $t$, where each $b_{i}$, $d_{i}$ is an element of $G_{2}$.</p>\n\n<p>So, </p>\n\n<p>$\\begin{align}h = h_{1} \\times h_{2} = ([a_{1},c_{1}]\\cdot [a_{2}, c_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [a_{t},c_{t}], [b_{1},d_{1}]\\cdot [b_{2}, d_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [b_{t},d_{t}])\\\\ =(a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \\,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\\\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\\\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\\\ = [(a_{1},b_{1}),(c_{1},d_{1})]\\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\\cdot\\,\\, \\cdots \\,\\, \\cdot[(a_{t},b_{t}),(c_{t},d_{t})] \\in (G_{1}\\times G_{2})^{\\prime} \\end{align}$</p>\n\n<p>So, we have the inclusion $\\mathbf{G_{1}^{\\prime}\\times G_{2}^{\\prime} \\subseteq (G_{1}\\times 
G_{2})^{\\prime}}$.</p>\n\n<hr>\n\n<p>Thus, since we have inclusion in both directions, we have established the equality $\\mathbf{(G_{1}\\times G_{2})^{\\prime} = G_{1}^{\\prime}\\times G_{2}^{\\prime}}$ for the right things this time, hopefully.</p>\n" }, { "answer_id": 2081329, "score": 3, "text": "<p>Let $(g_{1},g_{2})\\,,\\,(h_{1},h_{2}) \\in G_{1}\\times G_{2}$. Then, $\\begin{align}[(g_{1},g_{2}),(h_{1},h_{2})]=(g_{1},g_{2})^{-1}(h_{1},h_{2})^{-1}(g_{1},g_{2})(h_{1},h_{2}) \\\\ =(g_{1}^{-1},g_{2}^{-1})(h_{1}^{-1},h_{2}^{-1})(g_{1},g_{2})(h_{1},h_{2}) = (g_{1}^{-1}h_{1}^{-1}g_{1}h_{1},g_{2}^{-1}h_{2}^{-1}g_{2}h_{2}) \\\\ = ([g_{1},h_{1}],[g_{2},h_{2}]) = ([g_{1},h_{1}],e_{G_{2}})(e_{G_{1}},[g_{2},h_{2}])\\end{align}$.</p>\n\n<p>So, each commutator in $G_{1} \\times G_{2}$ is the product of one in $G_{1}$ with one in $G_{2}$.</p>\n\n<p>Therefore, $\\mathbf{(G_{1}\\times G_{2})^{\\prime}\\subseteq G_{1}^{\\prime}\\times G_{2}^{\\prime}}$.</p>\n\n<p>Now, to get the inclusion in the other direction, let $(g,h) \\in G_{1}^{\\prime} \\times G_{2}^{\\prime}$. Then, $g = [g_{1},g_{2}]\\in G_{1}^{\\prime}$ and $h = [h_{1},h_{2}] \\in G_{2}^{\\prime}$.</p>\n\n<p>So, </p>\n\n<p>$\\begin{align}(g,h) = ([g_{1},g_{2}],[h_{1},h_{2}]) = (g_{1}^{-1}g_{2}^{-1}g_{1}g_{2}, h_{1}^{-1}h_{2}^{-1}h_{1}h_{2}) \\\\ = (g_{1}^{-1},h_{1}^{-1})(g_{2}^{-1},h_{2}^{-1})(g_{1},h_{1})(g_{2},h_{2}) \\\\ = (g_{1},h_{1})^{-1}(g_{2},h_{2})^{-1}(g_{1},h_{1})(g_{2},h_{2}) \\\\= [(g_{1},h_{1}),(g_{2},h_{2})] \\end{align} $ </p>\n\n<p>where $(g_{1},h_{1}), (g_{2},h_{2}) \\in G_{1} \\times G_{2}$. </p>\n\n<p>Therefore, the product of a commutator in $G_{1}$ with a commutator in $G_{2}$ is a commutator in $G_{1} \\times G_{2}$. 
Thus, $\\mathbf{G_{1}^{\\prime} \\times G_{2}^{\\prime} \\subseteq (G_{1} \\times G_{2})^{\\prime}}$.</p>\n\n<p>And so, we get our equality, $G_{1}^{\\prime} \\times G_{2}^{\\prime} = (G_{1} \\times G_{2})^{\\prime}$.</p>\n" }, { "answer_id": 2081407, "score": 4, "text": "<p>Okay...I think this answer finally takes care of the difficulty. Thanks to arctictern for all his help :)</p>\n\n<hr>\n\n<p>Let $g \\in (G_{1}\\times G_{2})^{\\prime}$. Then, since $(G_{1} \\times G_{2})^{\\prime}$ is the subgroup generated by all commutators of elements of $G_{1} \\times G_{2}$, $g$ is the product of commutators of elements of $G_{1} \\times G_{2}$. Letting $(a_{i},b_{i})$, $(c_{i},d_{i})\\in G_{1} \\times G_{2}$, we have that</p>\n\n<p>$\\begin{align} g = [(a_{1},b_{1}),(c_{1},d_{1})]\\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\\cdot\\,\\, \\cdots \\,\\, \\cdot[(a_{t},b_{t}),(c_{t},d_{t})]\\,\\text{for some}\\, t, \\\\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\\\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\\\ = (a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \\,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\\\ = ([a_{1},c_{1}]\\cdot [a_{2},c_{2}]\\cdot \\,\\, \\cdots \\, \\, \\cdot [a_{t},c_{t}], \\,[b_{1},d_{1}]\\cdot [b_{2},d_{2}]\\cdot \\,\\, \\cdots \\, \\, \\cdot [b_{t},d_{t}]) \\in G_{1}^{\\prime} \\times G_{2}^{\\prime}\\end{align}$</p>\n\n<p>So, we have the inclusion $\\mathbf{(G_{1}\\times G_{2})^{\\prime}\\subseteq G_{1}^{\\prime}\\times 
G_{2}^{\\prime}}$</p>\n\n<hr>\n\n<p>In the other direction, let $h \\in G_{1}^{\\prime} \\times G_{2}^{\\prime}$.</p>\n\n<p>To belong in $ G_{1}^{\\prime} \\times G_{2}^{\\prime}$, $h$ must be the direct product of an element of $G_{1}^{\\prime}$, which is the (group operation) product of commutators of elements of $G_{1}$, and an element of $G_{2}^{\\prime}$, which is the (group operation) product of commutators of elements of $G_{2}$.</p>\n\n<p>Let $h_{1}$ be such an element of $G_{1}^{\\prime}$ and let $h_{2}$ be such an element of $G_{2}^{\\prime}$, and let $h = h_{1} \\times h_{2}$. Then,</p>\n\n<p>$h_{1} = [a_{1},c_{1}]\\cdot [a_{2}, c_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [a_{t},c_{t}]$ for some $t$, where each $a_{i}$, $c_{i}$ is an element of $G_{1}$</p>\n\n<p>and $h_{2} =[b_{1},d_{1}]\\cdot [b_{2}, d_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [b_{t},d_{t}]$ for some $t$, where each $b_{i}$, $d_{i}$ is an element of $G_{2}$.</p>\n\n<p>So, </p>\n\n<p>$\\begin{align}h = h_{1} \\times h_{2} = ([a_{1},c_{1}]\\cdot [a_{2}, c_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [a_{t},c_{t}], [b_{1},d_{1}]\\cdot [b_{2}, d_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [b_{t},d_{t}])\\\\ =(a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \\,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\\\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\\\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\\\ = [(a_{1},b_{1}),(c_{1},d_{1})]\\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\\cdot\\,\\, 
\\cdots \\,\\, \\cdot[(a_{t},b_{t}),(c_{t},d_{t})] \\in (G_{1}\\times G_{2})^{\\prime} \\end{align}$</p>\n\n<p>So, we have the inclusion $\\mathbf{G_{1}^{\\prime}\\times G_{2}^{\\prime} \\subseteq (G_{1}\\times G_{2})^{\\prime}}$.</p>\n\n<hr>\n\n<p>Thus, since we have inclusion in both directions, we have established the equality $\\mathbf{(G_{1}\\times G_{2})^{\\prime} = G_{1}^{\\prime}\\times G_{2}^{\\prime}}$ for the right things this time, hopefully.</p>\n" }, { "answer_id": 2081329, "score": 3, "text": "<p>Let $(g_{1},g_{2})\\,,\\,(h_{1},h_{2}) \\in G_{1}\\times G_{2}$. Then, $\\begin{align}[(g_{1},g_{2})(h_{1},h_{2})]=(g_{1},g_{2})^{-1}(h_{1},h_{2})^{-1}(g_{1},g_{2})(h_{1},h_{2}) \\\\ =(g_{1}^{-1},g_{2}^{-1})(h_{1}^{-1},h_{2}^{-1})(g_{1},g_{2})(h_{1},h_{2}) = (g_{1}^{-1}h_{1}^{-1}g_{1}h_{1},g_{2}^{-1}h_{2}^{-1}g_{2}h_{2}) \\\\ = ([g_{1},h_{1}],[g_{2},h_{2}]) = ([g_{1},h_{1}],e_{G_{2}})(e_{G_{1}},[g_{2},h_{2}])\\end{align}$.</p>\n\n<p>So, each commutator in $G_{1} \\times G_{2}$ is the product of one in $G_{1}$ with one in $G_{2}$.</p>\n\n<p>Therefore, $\\mathbf{(G_{1}\\times G_{2})^{\\prime}\\subseteq G_{1}^{\\prime}\\times G_{2}^{\\prime}}$.</p>\n\n<p>Now, to get the inclusion in the other direction, let $(g,h) \\in G_{1}^{\\prime} \\times G_{2}^{\\prime}$. Then, $g = [g_{1},g_{2}]\\in G_{1}^{\\prime}$ and $h = [h_{1},h_{2}] \\in G_{2}^{\\prime}$.</p>\n\n<p>So, </p>\n\n<p>$\\begin{align}(g,h) = ([g_{1},g_{2}],[h_{1},h_{2}]) = (g_{1}^{-1}g_{2}^{-1}g_{1}g_{2}, h_{1}^{-1}h_{2}^{-1}h_{1}h_{2}) \\\\ = (g_{1}^{-1},h_{1}^{-1})(g_{2}^{-1},h_{2}^{-1})(g_{1},h_{1})(g_{2},h_{2}) \\\\ = (g_{1},h_{1})^{-1}(g_{2},h_{2})^{-1}(g_{1},h_{1})(g_{2},h_{2}) \\\\= [(g_{1},h_{1}),(g_{2},h_{2})] \\end{align} $ </p>\n\n<p>where $(g_{1},h_{1}), (g_{2},h_{2}) \\in G_{1} \\times G_{2}$. </p>\n\n<p>Therefore, the product of a commutator in $G_{1}$ with a commutator in $G_{2}$ is a commutator in $G_{1} \\times G_{2}$. 
Thus, $\\mathbf{G_{1}^{\\prime} \\times G_{2}^{\\prime} \\subseteq (G_{1} \\times G_{2})^{\\prime}}$.</p>\n\n<p>And so, we get our equality, $G_{1}^{\\prime} \\times G_{2}^{\\prime} = (G_{1} \\times G_{2})^{\\prime}$.</p>\n" }, { "answer_id": 2081407, "score": 4, "text": "<p>Okay...I think this answer finally takes care of the difficulty. Thanks to arctictern for all his help :)</p>\n\n<hr>\n\n<p>Let $g \\in (G_{1}\\times G_{2})^{\\prime}$. Then, since $(G_{1} \\times G_{2})^{\\prime}$ is the subgroup generated by all commutators of elements of $G_{1} \\times G_{2}$, $g$ is the product of commutators of elements of $G_{1} \\times G_{2}$. Letting $(a_{i},b_{i})$, $(c_{i},d_{i})\\in G_{1} \\times G_{2}$, we have that</p>\n\n<p>$\\begin{align} g = [(a_{1},b_{1}),(c_{1},d_{1})]\\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\\cdot\\,\\, \\cdots \\,\\, \\cdot[(a_{t},b_{t}),(c_{t},d_{t})]\\,\\text{for some}\\, t, \\\\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\\\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\\\ = (a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \\,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\\\ = ([a_{1},c_{1}]\\cdot [a_{2},c_{2}]\\cdot \\,\\, \\cdots \\, \\, \\cdot [a_{t},c_{t}], \\,[b_{1},d_{1}]\\cdot [b_{2},d_{2}]\\cdot \\,\\, \\cdots \\, \\, \\cdot [b_{t},d_{t}]) \\in G_{1}^{\\prime} \\times G_{2}^{\\prime}\\end{align}$</p>\n\n<p>So, we have the inclusion $\\mathbf{(G_{1}\\times G_{2})^{\\prime}\\subseteq G_{1}^{\\prime}\\times 
G_{2}^{\\prime}}$</p>\n\n<hr>\n\n<p>In the other direction, let $h \\in G_{1}^{\\prime} \\times G_{2}^{\\prime}$.</p>\n\n<p>To belong in $ G_{1}^{\\prime} \\times G_{2}^{\\prime}$, $h$ must be the direct product of an element of $G_{1}^{\\prime}$, which is the (group operation) product of commutators of elements of $G_{1}$, and an element of $G_{2}^{\\prime}$, which is the (group operation) product of commutators of elements of $G_{2}$.</p>\n\n<p>Let $h_{1}$ be such an element of $G_{1}^{\\prime}$ and let $h_{2}$ be such an element of $G_{2}^{\\prime}$, and let $h = h_{1} \\times h_{2}$. Then,</p>\n\n<p>$h_{1} = [a_{1},c_{1}]\\cdot [a_{2}, c_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [a_{t},c_{t}]$ for some $t$, where each $a_{i}$, $c_{i}$ is an element of $G_{1}$</p>\n\n<p>and $h_{2} =[b_{1},d_{1}]\\cdot [b_{2}, d_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [b_{t},d_{t}]$ for some $t$, where each $b_{i}$, $d_{i}$ is an element of $G_{2}$.</p>\n\n<p>So, </p>\n\n<p>$\\begin{align}h = h_{1} \\times h_{2} = ([a_{1},c_{1}]\\cdot [a_{2}, c_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [a_{t},c_{t}], [b_{1},d_{1}]\\cdot [b_{2}, d_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [b_{t},d_{t}])\\\\ =(a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \\,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\\\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\\\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\\\ = [(a_{1},b_{1}),(c_{1},d_{1})]\\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\\cdot\\,\\, 
\\cdots \\,\\, \\cdot[(a_{t},b_{t}),(c_{t},d_{t})] \\in (G_{1}\\times G_{2})^{\\prime} \\end{align}$</p>\n\n<p>So, we have the inclusion $\\mathbf{G_{1}^{\\prime}\\times G_{2}^{\\prime} \\subseteq (G_{1}\\times G_{2})^{\\prime}}$.</p>\n\n<hr>\n\n<p>Thus, since we have inclusion in both directions, we have established the equality $\\mathbf{(G_{1}\\times G_{2})^{\\prime} = G_{1}^{\\prime}\\times G_{2}^{\\prime}}$ for the right things this time, hopefully.</p>\n" }, { "answer_id": 2081329, "score": 3, "text": "<p>Let $(g_{1},g_{2})\\,,\\,(h_{1},h_{2}) \\in G_{1}\\times G_{2}$. Then, $\\begin{align}[(g_{1},g_{2})(h_{1},h_{2})]=(g_{1},g_{2})^{-1}(h_{1},h_{2})^{-1}(g_{1},g_{2})(h_{1},h_{2}) \\\\ =(g_{1}^{-1},g_{2}^{-1})(h_{1}^{-1},h_{2}^{-1})(g_{1},g_{2})(h_{1},h_{2}) = (g_{1}^{-1}h_{1}^{-1}g_{1}h_{1},g_{2}^{-1}h_{2}^{-1}g_{2}h_{2}) \\\\ = ([g_{1},h_{1}],[g_{2},h_{2}]) = ([g_{1},h_{1}],e_{G_{2}})(e_{G_{1}},[g_{2},h_{2}])\\end{align}$.</p>\n\n<p>So, each commutator in $G_{1} \\times G_{2}$ is the product of one in $G_{1}$ with one in $G_{2}$.</p>\n\n<p>Therefore, $\\mathbf{(G_{1}\\times G_{2})^{\\prime}\\subseteq G_{1}^{\\prime}\\times G_{2}^{\\prime}}$.</p>\n\n<p>Now, to get the inclusion in the other direction, let $(g,h) \\in G_{1}^{\\prime} \\times G_{2}^{\\prime}$. Then, $g = [g_{1},g_{2}]\\in G_{1}^{\\prime}$ and $h = [h_{1},h_{2}] \\in G_{2}^{\\prime}$.</p>\n\n<p>So, </p>\n\n<p>$\\begin{align}(g,h) = ([g_{1},g_{2}],[h_{1},h_{2}]) = (g_{1}^{-1}g_{2}^{-1}g_{1}g_{2}, h_{1}^{-1}h_{2}^{-1}h_{1}h_{2}) \\\\ = (g_{1}^{-1},h_{1}^{-1})(g_{2}^{-1},h_{2}^{-1})(g_{1},h_{1})(g_{2},h_{2}) \\\\ = (g_{1},h_{1})^{-1}(g_{2},h_{2})^{-1}(g_{1},h_{1})(g_{2},h_{2}) \\\\= [(g_{1},h_{1}),(g_{2},h_{2})] \\end{align} $ </p>\n\n<p>where $(g_{1},h_{1}), (g_{2},h_{2}) \\in G_{1} \\times G_{2}$. </p>\n\n<p>Therefore, the product of a commutator in $G_{1}$ with a commutator in $G_{2}$ is a commutator in $G_{1} \\times G_{2}$. 
Thus, $\\mathbf{G_{1}^{\\prime} \\times G_{2}^{\\prime} \\subseteq (G_{1} \\times G_{2})^{\\prime}}$.</p>\n\n<p>And so, we get our equality, $G_{1}^{\\prime} \\times G_{2}^{\\prime} = (G_{1} \\times G_{2})^{\\prime}$.</p>\n" }, { "answer_id": 2081407, "score": 4, "text": "<p>Okay...I think this answer finally takes care of the difficulty. Thanks to arctictern for all his help :)</p>\n\n<hr>\n\n<p>Let $g \\in (G_{1}\\times G_{2})^{\\prime}$. Then, since $(G_{1} \\times G_{2})^{\\prime}$ is the subgroup generated by all commutators of elements of $G_{1} \\times G_{2}$, $g$ is the product of commutators of elements of $G_{1} \\times G_{2}$. Letting $(a_{i},b_{i})$, $(c_{i},d_{i})\\in G_{1} \\times G_{2}$, we have that</p>\n\n<p>$\\begin{align} g = [(a_{1},b_{1}),(c_{1},d_{1})]\\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\\cdot\\,\\, \\cdots \\,\\, \\cdot[(a_{t},b_{t}),(c_{t},d_{t})]\\,\\text{for some}\\, t, \\\\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\\\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\\\ = (a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \\,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\\\ = ([a_{1},c_{1}]\\cdot [a_{2},c_{2}]\\cdot \\,\\, \\cdots \\, \\, \\cdot [a_{t},c_{t}], \\,[b_{1},d_{1}]\\cdot [b_{2},d_{2}]\\cdot \\,\\, \\cdots \\, \\, \\cdot [b_{t},d_{t}]) \\in G_{1}^{\\prime} \\times G_{2}^{\\prime}\\end{align}$</p>\n\n<p>So, we have the inclusion $\\mathbf{(G_{1}\\times G_{2})^{\\prime}\\subseteq G_{1}^{\\prime}\\times 
G_{2}^{\\prime}}$</p>\n\n<hr>\n\n<p>In the other direction, let $h \\in G_{1}^{\\prime} \\times G_{2}^{\\prime}$.</p>\n\n<p>To belong in $ G_{1}^{\\prime} \\times G_{2}^{\\prime}$, $h$ must be the direct product of an element of $G_{1}^{\\prime}$, which is the (group operation) product of commutators of elements of $G_{1}$, and an element of $G_{2}^{\\prime}$, which is the (group operation) product of commutators of elements of $G_{2}$.</p>\n\n<p>Let $h_{1}$ be such an element of $G_{1}^{\\prime}$ and let $h_{2}$ be such an element of $G_{2}^{\\prime}$, and let $h = h_{1} \\times h_{2}$. Then,</p>\n\n<p>$h_{1} = [a_{1},c_{1}]\\cdot [a_{2}, c_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [a_{t},c_{t}]$ for some $t$, where each $a_{i}$, $c_{i}$ is an element of $G_{1}$</p>\n\n<p>and $h_{2} =[b_{1},d_{1}]\\cdot [b_{2}, d_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [b_{t},d_{t}]$ for some $t$, where each $b_{i}$, $d_{i}$ is an element of $G_{2}$.</p>\n\n<p>So, </p>\n\n<p>$\\begin{align}h = h_{1} \\times h_{2} = ([a_{1},c_{1}]\\cdot [a_{2}, c_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [a_{t},c_{t}], [b_{1},d_{1}]\\cdot [b_{2}, d_{2}] \\cdot \\, \\, \\cdots \\, \\, \\cdot [b_{t},d_{t}])\\\\ =(a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \\,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\\\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\\\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\\cdot \\, \\, \\cdots \\, \\,\\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\\\ = [(a_{1},b_{1}),(c_{1},d_{1})]\\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\\cdot\\,\\, 
\\cdots \\,\\, \\cdot[(a_{t},b_{t}),(c_{t},d_{t})] \\in (G_{1}\\times G_{2})^{\\prime} \\end{align}$</p>\n\n<p>So, we have the inclusion $\\mathbf{G_{1}^{\\prime}\\times G_{2}^{\\prime} \\subseteq (G_{1}\\times G_{2})^{\\prime}}$.</p>\n\n<hr>\n\n<p>Thus, since we have inclusion in both directions, we have established the equality $\\mathbf{(G_{1}\\times G_{2})^{\\prime} = G_{1}^{\\prime}\\times G_{2}^{\\prime}}$ for the right things this time, hopefully.</p>\n" }, { "answer_id": 2081329, "score": 3, "text": "<p>Let $(g_{1},g_{2})\\,,\\,(h_{1},h_{2}) \\in G_{1}\\times G_{2}$. Then, $\\begin{align}[(g_{1},g_{2}),(h_{1},h_{2})]=(g_{1},g_{2})^{-1}(h_{1},h_{2})^{-1}(g_{1},g_{2})(h_{1},h_{2}) \\\\ =(g_{1}^{-1},g_{2}^{-1})(h_{1}^{-1},h_{2}^{-1})(g_{1},g_{2})(h_{1},h_{2}) = (g_{1}^{-1}h_{1}^{-1}g_{1}h_{1},g_{2}^{-1}h_{2}^{-1}g_{2}h_{2}) \\\\ = ([g_{1},h_{1}],[g_{2},h_{2}]) = ([g_{1},h_{1}],e_{G_{2}})(e_{G_{1}},[g_{2},h_{2}])\\end{align}$.</p>\n\n<p>So, each commutator in $G_{1} \\times G_{2}$ is the product of one in $G_{1}$ with one in $G_{2}$.</p>\n\n<p>Therefore, $\\mathbf{(G_{1}\\times G_{2})^{\\prime}\\subseteq G_{1}^{\\prime}\\times G_{2}^{\\prime}}$.</p>\n\n<p>Now, to get the inclusion in the other direction, let $(g,h) \\in G_{1}^{\\prime} \\times G_{2}^{\\prime}$. Then, $g = [g_{1},g_{2}]\\in G_{1}^{\\prime}$ and $h = [h_{1},h_{2}] \\in G_{2}^{\\prime}$.</p>\n\n<p>So, </p>\n\n<p>$\\begin{align}(g,h) = ([g_{1},g_{2}],[h_{1},h_{2}]) = (g_{1}^{-1}g_{2}^{-1}g_{1}g_{2}, h_{1}^{-1}h_{2}^{-1}h_{1}h_{2}) \\\\ = (g_{1}^{-1},h_{1}^{-1})(g_{2}^{-1},h_{2}^{-1})(g_{1},h_{1})(g_{2},h_{2}) \\\\ = (g_{1},h_{1})^{-1}(g_{2},h_{2})^{-1}(g_{1},h_{1})(g_{2},h_{2}) \\\\= [(g_{1},h_{1}),(g_{2},h_{2})] \\end{align} $ </p>\n\n<p>where $(g_{1},h_{1}), (g_{2},h_{2}) \\in G_{1} \\times G_{2}$. </p>\n\n<p>Therefore, the pair formed by a commutator in $G_{1}$ and a commutator in $G_{2}$ is a commutator in $G_{1} \\times G_{2}$. 
Thus, $\\mathbf{G_{1}^{\\prime} \\times G_{2}^{\\prime} \\subseteq (G_{1} \\times G_{2})^{\\prime}}$.</p>\n\n<p>And so, we get our equality, $G_{1}^{\\prime} \\times G_{2}^{\\prime} = (G_{1} \\times G_{2})^{\\prime}$.</p>\n" } ]
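The identity argued in the answers above, $(G_{1}\times G_{2})^{\prime} = G_{1}^{\prime}\times G_{2}^{\prime}$, can also be sanity-checked computationally on a small finite example. The following Python sketch is not from the original thread; it represents permutations as tuples, takes $S_{3}$ for both factors (so $S_{3}^{\prime} = A_{3}$), and computes derived subgroups by brute-force closure over commutators:

```python
from itertools import permutations, product

def compose(p, q):
    """Compose permutations written as tuples: (p * q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    """Inverse of a permutation written as a tuple."""
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def derived_subgroup(G, mul, inv):
    """Subgroup generated by all commutators [a,b] = a^-1 b^-1 a b (finite G)."""
    elems = {mul(mul(mul(inv(a), inv(b)), a), b) for a in G for b in G}
    changed = True
    while changed:  # close the generating set under multiplication
        changed = False
        for a in list(elems):
            for b in list(elems):
                c = mul(a, b)
                if c not in elems:
                    elems.add(c)
                    changed = True
    return elems

# S3: all permutations of {0, 1, 2}
S3 = set(permutations(range(3)))

# Direct product S3 x S3, with componentwise multiplication and inversion
def pair_mul(x, y):
    return (compose(x[0], y[0]), compose(x[1], y[1]))

def pair_inv(x):
    return (inverse(x[0]), inverse(x[1]))

G = set(product(S3, repeat=2))
lhs = derived_subgroup(G, pair_mul, pair_inv)        # (S3 x S3)'
S3_derived = derived_subgroup(S3, compose, inverse)  # S3' = A3
rhs = set(product(S3_derived, repeat=2))             # S3' x S3'

print(len(S3_derived), len(lhs), lhs == rhs)  # 3 9 True
```

The same check runs for any pair of small finite groups given as sets of permutation tuples; the closure loop is quadratic in the subgroup size, which is cheap at this scale.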
Question: Commutator Subgroup of Direct Product equals the Direct Product of the Commutator Subgroups Answer 1: <p>Okay...I think this answer finally takes care of the difficulty. Thanks to arctictern for all his help :)</p> <hr> <p>Let $g \in (G_{1}\times G_{2})^{\prime}$. Then, since $(G_{1} \times G_{2})^{\prime}$ is the subgroup generated by all commutators of elements of $G_{1} \times G_{2}$, $g$ is a product of commutators of elements of $G_{1} \times G_{2}$. Letting $(a_{i},b_{i})$, $(c_{i},d_{i})\in G_{1} \times G_{2}$, we have that</p> <p>$\begin{align} g = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})]\,\text{for some}\, t, \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = ([a_{1},c_{1}]\cdot [a_{2},c_{2}]\cdot \,\, \cdots \, \, \cdot [a_{t},c_{t}], \,[b_{1},d_{1}]\cdot [b_{2},d_{2}]\cdot \,\, \cdots \, \, \cdot [b_{t},d_{t}]) \in G_{1}^{\prime} \times G_{2}^{\prime}\end{align}$</p> <p>So, we have the inclusion $\mathbf{(G_{1}\times G_{2})^{\prime}\subseteq G_{1}^{\prime}\times G_{2}^{\prime}}$.</p> <hr> <p>In the other direction, let $h \in G_{1}^{\prime} \times G_{2}^{\prime}$.</p> <p>To belong to $G_{1}^{\prime} \times G_{2}^{\prime}$, $h$ must be the direct product of an element of $G_{1}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{1}$, and an element of $G_{2}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{2}$.</p> <p>Let $h_{1}$ be such an element of $G_{1}^{\prime}$ and let $h_{2}$ be such an element of $G_{2}^{\prime}$, and let $h = h_{1} \times h_{2}$. Then,</p> <p>$h_{1} = [a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}]$ for some $t$, where each $a_{i}$, $c_{i}$ is an element of $G_{1}$</p> <p>and $h_{2} = [b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}]$ for the same $t$ (padding the shorter product with trivial commutators $[e,e]$ if necessary, so that a common $t$ may be used), where each $b_{i}$, $d_{i}$ is an element of $G_{2}$.</p> <p>So, </p> <p>$\begin{align}h = h_{1} \times h_{2} = ([a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}], [b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}])\\ =(a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})] \in (G_{1}\times G_{2})^{\prime} \end{align}$</p> <p>So, we have the inclusion $\mathbf{G_{1}^{\prime}\times G_{2}^{\prime} \subseteq (G_{1}\times G_{2})^{\prime}}$.</p> <hr> <p>Thus, since we have inclusion in both directions, we have established the equality $\mathbf{(G_{1}\times G_{2})^{\prime} = G_{1}^{\prime}\times
G_{2}^{\prime}}$ for the right things this time, hopefully.</p> Answer 2: <p>Let $(g_{1},g_{2})\,,\,(h_{1},h_{2}) \in G_{1}\times G_{2}$. Then, $\begin{align}[(g_{1},g_{2})(h_{1},h_{2})]=(g_{1},g_{2})^{-1}(h_{1},h_{2})^{-1}(g_{1},g_{2})(h_{1},h_{2}) \\ =(g_{1}^{-1},g_{2}^{-1})(h_{1}^{-1},h_{2}^{-1})(g_{1},g_{2})(h_{1},h_{2}) = (g_{1}^{-1}h_{1}^{-1}g_{1}h_{1},g_{2}^{-1}h_{2}^{-1}g_{2}h_{2}) \\ = ([g_{1},h_{1}],[g_{2},h_{2}]) = ([g_{1},h_{1}],e_{G_{2}})(e_{G_{1}},[g_{2},h_{2}])\end{align}$.</p> <p>So, each commutator in $G_{1} \times G_{2}$ is the product of one in $G_{1}$ with one in $G_{2}$.</p> <p>Therefore, $\mathbf{(G_{1}\times G_{2})^{\prime}\subseteq G_{1}^{\prime}\times G_{2}^{\prime}}$.</p> <p>Now, to get the inclusion in the other direction, let $(g,h) \in G_{1}^{\prime} \times G_{2}^{\prime}$. Then, $g = [g_{1},g_{2}]\in G_{1}^{\prime}$ and $h = [h_{1},h_{2}] \in G_{2}^{\prime}$.</p> <p>So, </p> <p>$\begin{align}(g,h) = ([g_{1},g_{2}],[h_{1},h_{2}]) = (g_{1}^{-1}g_{2}^{-1}g_{1}g_{2}, h_{1}^{-1}h_{2}^{-1}h_{1}h_{2}) \\ = (g_{1}^{-1},h_{1}^{-1})(g_{2}^{-1},h_{2}^{-1})(g_{1},h_{1})(g_{2},h_{2}) \\ = (g_{1},h_{1})^{-1}(g_{2},h_{2})^{-1}(g_{1},h_{1})(g_{2},h_{2}) \\= [(g_{1},h_{1}),(g_{2},h_{2})] \end{align} $ </p> <p>where $(g_{1},h_{1}), (g_{2},h_{2}) \in G_{1} \times G_{2}$. </p> <p>Therefore, the product of a commutator in $G_{1}$ with a commutator in $G_{2}$ is a commutator in $G_{1} \times G_{2}$. Thus, $\mathbf{G_{1}^{\prime} \times G_{2}^{\prime} \subseteq (G_{1} \times G_{2})^{\prime}}$.</p> <p>And so, we get our equality, $G_{1}^{\prime} \times G_{2}^{\prime} = (G_{1} \times G_{2})^{\prime}$.</p> Answer 3: <p>Okay...I think this answer finally takes care of the difficulty. Thanks to arctictern for all his help :)</p> <hr> <p>Let $g \in (G_{1}\times G_{2})^{\prime}$. 
Then, since $(G_{1} \times G_{2})^{\prime}$ is the subgroup generated by all commutators of elements of $G_{1} \times G_{2}$, $g$ is the product of commutators of elements of $G_{1} \times G_{2}$. Letting $(a_{i},b_{i})$, $(c_{i},d_{i})\in G_{1} \times G_{2}$, we have that</p> <p>$\begin{align} g = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})]\,\text{for some}\, t, \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = ([a_{1},c_{1}]\cdot [a_{2},c_{2}]\cdot \,\, \cdots \, \, \cdot [a_{t},c_{t}], \,[b_{1},d_{1}]\cdot [b_{2},d_{2}]\cdot \,\, \cdots \, \, \cdot [b_{t},d_{t}]) \in G_{1}^{\prime} \times G_{2}^{\prime}\end{align}$</p> <p>So, we have the inclusion $\mathbf{(G_{1}\times G_{2})^{\prime}\subseteq G_{1}^{\prime}\times G_{2}^{\prime}}$</p> <hr> <p>In the other direction, let $h \in G_{1}^{\prime} \times G_{2}^{\prime}$.</p> <p>To belong in $ G_{1}^{\prime} \times G_{2}^{\prime}$, $h$ must be the direct product of an element of $G_{1}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{1}$, and an element of $G_{2}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{2}$.</p> <p>Let $h_{1}$ be such an element of $G_{1}^{\prime}$ and let $h_{2}$ be such an element of $G_{2}^{\prime}$, and let $h = 
h_{1} \times h_{2}$. Then,</p> <p>$h_{1} = [a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}]$ for some $t$, where each $a_{i}$, $c_{i}$ is an element of $G_{1}$</p> <p>and $h_{2} =[b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}]$ for some $t$, where each $b_{i}$, $d_{i}$ is an element of $G_{2}$.</p> <p>So, </p> <p>$\begin{align}h = h_{1} \times h_{2} = ([a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}], [b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}])\\ =(a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})] \in (G_{1}\times G_{2})^{\prime} \end{align}$</p> <p>So, we have the inclusion $\mathbf{G_{1}^{\prime}\times G_{2}^{\prime} \subseteq (G_{1}\times G_{2})^{\prime}}$.</p> <hr> <p>Thus, since we have inclusion in both directions, we have established the equality $\mathbf{(G_{1}\times G_{2})^{\prime} = G_{1}^{\prime}\times G_{2}^{\prime}}$ for the right things this time, hopefully.</p> Answer 4: <p>Let $(g_{1},g_{2})\,,\,(h_{1},h_{2}) \in G_{1}\times G_{2}$. 
Then, $\begin{align}[(g_{1},g_{2})(h_{1},h_{2})]=(g_{1},g_{2})^{-1}(h_{1},h_{2})^{-1}(g_{1},g_{2})(h_{1},h_{2}) \\ =(g_{1}^{-1},g_{2}^{-1})(h_{1}^{-1},h_{2}^{-1})(g_{1},g_{2})(h_{1},h_{2}) = (g_{1}^{-1}h_{1}^{-1}g_{1}h_{1},g_{2}^{-1}h_{2}^{-1}g_{2}h_{2}) \\ = ([g_{1},h_{1}],[g_{2},h_{2}]) = ([g_{1},h_{1}],e_{G_{2}})(e_{G_{1}},[g_{2},h_{2}])\end{align}$.</p> <p>So, each commutator in $G_{1} \times G_{2}$ is the product of one in $G_{1}$ with one in $G_{2}$.</p> <p>Therefore, $\mathbf{(G_{1}\times G_{2})^{\prime}\subseteq G_{1}^{\prime}\times G_{2}^{\prime}}$.</p> <p>Now, to get the inclusion in the other direction, let $(g,h) \in G_{1}^{\prime} \times G_{2}^{\prime}$. Then, $g = [g_{1},g_{2}]\in G_{1}^{\prime}$ and $h = [h_{1},h_{2}] \in G_{2}^{\prime}$.</p> <p>So, </p> <p>$\begin{align}(g,h) = ([g_{1},g_{2}],[h_{1},h_{2}]) = (g_{1}^{-1}g_{2}^{-1}g_{1}g_{2}, h_{1}^{-1}h_{2}^{-1}h_{1}h_{2}) \\ = (g_{1}^{-1},h_{1}^{-1})(g_{2}^{-1},h_{2}^{-1})(g_{1},h_{1})(g_{2},h_{2}) \\ = (g_{1},h_{1})^{-1}(g_{2},h_{2})^{-1}(g_{1},h_{1})(g_{2},h_{2}) \\= [(g_{1},h_{1}),(g_{2},h_{2})] \end{align} $ </p> <p>where $(g_{1},h_{1}), (g_{2},h_{2}) \in G_{1} \times G_{2}$. </p> <p>Therefore, the product of a commutator in $G_{1}$ with a commutator in $G_{2}$ is a commutator in $G_{1} \times G_{2}$. Thus, $\mathbf{G_{1}^{\prime} \times G_{2}^{\prime} \subseteq (G_{1} \times G_{2})^{\prime}}$.</p> <p>And so, we get our equality, $G_{1}^{\prime} \times G_{2}^{\prime} = (G_{1} \times G_{2})^{\prime}$.</p> Answer 5: <p>Okay...I think this answer finally takes care of the difficulty. Thanks to arctictern for all his help :)</p> <hr> <p>Let $g \in (G_{1}\times G_{2})^{\prime}$. Then, since $(G_{1} \times G_{2})^{\prime}$ is the subgroup generated by all commutators of elements of $G_{1} \times G_{2}$, $g$ is the product of commutators of elements of $G_{1} \times G_{2}$. 
Letting $(a_{i},b_{i})$, $(c_{i},d_{i})\in G_{1} \times G_{2}$, we have that</p> <p>$\begin{align} g = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})]\,\text{for some}\, t, \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = ([a_{1},c_{1}]\cdot [a_{2},c_{2}]\cdot \,\, \cdots \, \, \cdot [a_{t},c_{t}], \,[b_{1},d_{1}]\cdot [b_{2},d_{2}]\cdot \,\, \cdots \, \, \cdot [b_{t},d_{t}]) \in G_{1}^{\prime} \times G_{2}^{\prime}\end{align}$</p> <p>So, we have the inclusion $\mathbf{(G_{1}\times G_{2})^{\prime}\subseteq G_{1}^{\prime}\times G_{2}^{\prime}}$</p> <hr> <p>In the other direction, let $h \in G_{1}^{\prime} \times G_{2}^{\prime}$.</p> <p>To belong in $ G_{1}^{\prime} \times G_{2}^{\prime}$, $h$ must be the direct product of an element of $G_{1}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{1}$, and an element of $G_{2}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{2}$.</p> <p>Let $h_{1}$ be such an element of $G_{1}^{\prime}$ and let $h_{2}$ be such an element of $G_{2}^{\prime}$, and let $h = h_{1} \times h_{2}$. 
Then,</p> <p>$h_{1} = [a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}]$ for some $t$, where each $a_{i}$, $c_{i}$ is an element of $G_{1}$</p> <p>and $h_{2} =[b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}]$ for some $t$, where each $b_{i}$, $d_{i}$ is an element of $G_{2}$.</p> <p>So, </p> <p>$\begin{align}h = h_{1} \times h_{2} = ([a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}], [b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}])\\ =(a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})] \in (G_{1}\times G_{2})^{\prime} \end{align}$</p> <p>So, we have the inclusion $\mathbf{G_{1}^{\prime}\times G_{2}^{\prime} \subseteq (G_{1}\times G_{2})^{\prime}}$.</p> <hr> <p>Thus, since we have inclusion in both directions, we have established the equality $\mathbf{(G_{1}\times G_{2})^{\prime} = G_{1}^{\prime}\times G_{2}^{\prime}}$ for the right things this time, hopefully.</p> Answer 6: <p>Let $(g_{1},g_{2})\,,\,(h_{1},h_{2}) \in G_{1}\times G_{2}$. 
Then, $\begin{align}[(g_{1},g_{2})(h_{1},h_{2})]=(g_{1},g_{2})^{-1}(h_{1},h_{2})^{-1}(g_{1},g_{2})(h_{1},h_{2}) \\ =(g_{1}^{-1},g_{2}^{-1})(h_{1}^{-1},h_{2}^{-1})(g_{1},g_{2})(h_{1},h_{2}) = (g_{1}^{-1}h_{1}^{-1}g_{1}h_{1},g_{2}^{-1}h_{2}^{-1}g_{2}h_{2}) \\ = ([g_{1},h_{1}],[g_{2},h_{2}]) = ([g_{1},h_{1}],e_{G_{2}})(e_{G_{1}},[g_{2},h_{2}])\end{align}$.</p> <p>So, each commutator in $G_{1} \times G_{2}$ is the product of one in $G_{1}$ with one in $G_{2}$.</p> <p>Therefore, $\mathbf{(G_{1}\times G_{2})^{\prime}\subseteq G_{1}^{\prime}\times G_{2}^{\prime}}$.</p> <p>Now, to get the inclusion in the other direction, let $(g,h) \in G_{1}^{\prime} \times G_{2}^{\prime}$. Then, $g = [g_{1},g_{2}]\in G_{1}^{\prime}$ and $h = [h_{1},h_{2}] \in G_{2}^{\prime}$.</p> <p>So, </p> <p>$\begin{align}(g,h) = ([g_{1},g_{2}],[h_{1},h_{2}]) = (g_{1}^{-1}g_{2}^{-1}g_{1}g_{2}, h_{1}^{-1}h_{2}^{-1}h_{1}h_{2}) \\ = (g_{1}^{-1},h_{1}^{-1})(g_{2}^{-1},h_{2}^{-1})(g_{1},h_{1})(g_{2},h_{2}) \\ = (g_{1},h_{1})^{-1}(g_{2},h_{2})^{-1}(g_{1},h_{1})(g_{2},h_{2}) \\= [(g_{1},h_{1}),(g_{2},h_{2})] \end{align} $ </p> <p>where $(g_{1},h_{1}), (g_{2},h_{2}) \in G_{1} \times G_{2}$. </p> <p>Therefore, the product of a commutator in $G_{1}$ with a commutator in $G_{2}$ is a commutator in $G_{1} \times G_{2}$. Thus, $\mathbf{G_{1}^{\prime} \times G_{2}^{\prime} \subseteq (G_{1} \times G_{2})^{\prime}}$.</p> <p>And so, we get our equality, $G_{1}^{\prime} \times G_{2}^{\prime} = (G_{1} \times G_{2})^{\prime}$.</p> Answer 7: <p>Okay...I think this answer finally takes care of the difficulty. Thanks to arctictern for all his help :)</p> <hr> <p>Let $g \in (G_{1}\times G_{2})^{\prime}$. Then, since $(G_{1} \times G_{2})^{\prime}$ is the subgroup generated by all commutators of elements of $G_{1} \times G_{2}$, $g$ is the product of commutators of elements of $G_{1} \times G_{2}$. 
Letting $(a_{i},b_{i})$, $(c_{i},d_{i})\in G_{1} \times G_{2}$, we have that</p> <p>$\begin{align} g = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})]\,\text{for some}\, t, \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = ([a_{1},c_{1}]\cdot [a_{2},c_{2}]\cdot \,\, \cdots \, \, \cdot [a_{t},c_{t}], \,[b_{1},d_{1}]\cdot [b_{2},d_{2}]\cdot \,\, \cdots \, \, \cdot [b_{t},d_{t}]) \in G_{1}^{\prime} \times G_{2}^{\prime}\end{align}$</p> <p>So, we have the inclusion $\mathbf{(G_{1}\times G_{2})^{\prime}\subseteq G_{1}^{\prime}\times G_{2}^{\prime}}$</p> <hr> <p>In the other direction, let $h \in G_{1}^{\prime} \times G_{2}^{\prime}$.</p> <p>To belong in $ G_{1}^{\prime} \times G_{2}^{\prime}$, $h$ must be the direct product of an element of $G_{1}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{1}$, and an element of $G_{2}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{2}$.</p> <p>Let $h_{1}$ be such an element of $G_{1}^{\prime}$ and let $h_{2}$ be such an element of $G_{2}^{\prime}$, and let $h = h_{1} \times h_{2}$. 
Then,</p> <p>$h_{1} = [a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}]$ for some $t$, where each $a_{i}$, $c_{i}$ is an element of $G_{1}$</p> <p>and $h_{2} =[b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}]$ for some $t$, where each $b_{i}$, $d_{i}$ is an element of $G_{2}$.</p> <p>So, </p> <p>$\begin{align}h = h_{1} \times h_{2} = ([a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}], [b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}])\\ =(a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})] \in (G_{1}\times G_{2})^{\prime} \end{align}$</p> <p>So, we have the inclusion $\mathbf{G_{1}^{\prime}\times G_{2}^{\prime} \subseteq (G_{1}\times G_{2})^{\prime}}$.</p> <hr> <p>Thus, since we have inclusion in both directions, we have established the equality $\mathbf{(G_{1}\times G_{2})^{\prime} = G_{1}^{\prime}\times G_{2}^{\prime}}$ for the right things this time, hopefully.</p> Answer 8: <p>Let $(g_{1},g_{2})\,,\,(h_{1},h_{2}) \in G_{1}\times G_{2}$. 
Then, $\begin{align}[(g_{1},g_{2})(h_{1},h_{2})]=(g_{1},g_{2})^{-1}(h_{1},h_{2})^{-1}(g_{1},g_{2})(h_{1},h_{2}) \\ =(g_{1}^{-1},g_{2}^{-1})(h_{1}^{-1},h_{2}^{-1})(g_{1},g_{2})(h_{1},h_{2}) = (g_{1}^{-1}h_{1}^{-1}g_{1}h_{1},g_{2}^{-1}h_{2}^{-1}g_{2}h_{2}) \\ = ([g_{1},h_{1}],[g_{2},h_{2}]) = ([g_{1},h_{1}],e_{G_{2}})(e_{G_{1}},[g_{2},h_{2}])\end{align}$.</p> <p>So, each commutator in $G_{1} \times G_{2}$ is the product of one in $G_{1}$ with one in $G_{2}$.</p> <p>Therefore, $\mathbf{(G_{1}\times G_{2})^{\prime}\subseteq G_{1}^{\prime}\times G_{2}^{\prime}}$.</p> <p>Now, to get the inclusion in the other direction, let $(g,h) \in G_{1}^{\prime} \times G_{2}^{\prime}$. Then, $g = [g_{1},g_{2}]\in G_{1}^{\prime}$ and $h = [h_{1},h_{2}] \in G_{2}^{\prime}$.</p> <p>So, </p> <p>$\begin{align}(g,h) = ([g_{1},g_{2}],[h_{1},h_{2}]) = (g_{1}^{-1}g_{2}^{-1}g_{1}g_{2}, h_{1}^{-1}h_{2}^{-1}h_{1}h_{2}) \\ = (g_{1}^{-1},h_{1}^{-1})(g_{2}^{-1},h_{2}^{-1})(g_{1},h_{1})(g_{2},h_{2}) \\ = (g_{1},h_{1})^{-1}(g_{2},h_{2})^{-1}(g_{1},h_{1})(g_{2},h_{2}) \\= [(g_{1},h_{1}),(g_{2},h_{2})] \end{align} $ </p> <p>where $(g_{1},h_{1}), (g_{2},h_{2}) \in G_{1} \times G_{2}$. </p> <p>Therefore, the product of a commutator in $G_{1}$ with a commutator in $G_{2}$ is a commutator in $G_{1} \times G_{2}$. Thus, $\mathbf{G_{1}^{\prime} \times G_{2}^{\prime} \subseteq (G_{1} \times G_{2})^{\prime}}$.</p> <p>And so, we get our equality, $G_{1}^{\prime} \times G_{2}^{\prime} = (G_{1} \times G_{2})^{\prime}$.</p> Answer 9: <p>Okay...I think this answer finally takes care of the difficulty. Thanks to arctictern for all his help :)</p> <hr> <p>Let $g \in (G_{1}\times G_{2})^{\prime}$. Then, since $(G_{1} \times G_{2})^{\prime}$ is the subgroup generated by all commutators of elements of $G_{1} \times G_{2}$, $g$ is the product of commutators of elements of $G_{1} \times G_{2}$. 
Letting $(a_{i},b_{i})$, $(c_{i},d_{i})\in G_{1} \times G_{2}$, we have that</p> <p>$\begin{align} g = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})]\,\text{for some}\, t, \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = ([a_{1},c_{1}]\cdot [a_{2},c_{2}]\cdot \,\, \cdots \, \, \cdot [a_{t},c_{t}], \,[b_{1},d_{1}]\cdot [b_{2},d_{2}]\cdot \,\, \cdots \, \, \cdot [b_{t},d_{t}]) \in G_{1}^{\prime} \times G_{2}^{\prime}\end{align}$</p> <p>So, we have the inclusion $\mathbf{(G_{1}\times G_{2})^{\prime}\subseteq G_{1}^{\prime}\times G_{2}^{\prime}}$</p> <hr> <p>In the other direction, let $h \in G_{1}^{\prime} \times G_{2}^{\prime}$.</p> <p>To belong in $ G_{1}^{\prime} \times G_{2}^{\prime}$, $h$ must be the direct product of an element of $G_{1}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{1}$, and an element of $G_{2}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{2}$.</p> <p>Let $h_{1}$ be such an element of $G_{1}^{\prime}$ and let $h_{2}$ be such an element of $G_{2}^{\prime}$, and let $h = h_{1} \times h_{2}$. 
Then,</p> <p>$h_{1} = [a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}]$ for some $t$, where each $a_{i}$, $c_{i}$ is an element of $G_{1}$</p> <p>and $h_{2} =[b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}]$ for some $t$, where each $b_{i}$, $d_{i}$ is an element of $G_{2}$.</p> <p>So, </p> <p>$\begin{align}h = h_{1} \times h_{2} = ([a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}], [b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}])\\ =(a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})] \in (G_{1}\times G_{2})^{\prime} \end{align}$</p> <p>So, we have the inclusion $\mathbf{G_{1}^{\prime}\times G_{2}^{\prime} \subseteq (G_{1}\times G_{2})^{\prime}}$.</p> <hr> <p>Thus, since we have inclusion in both directions, we have established the equality $\mathbf{(G_{1}\times G_{2})^{\prime} = G_{1}^{\prime}\times G_{2}^{\prime}}$ for the right things this time, hopefully.</p> Answer 10: <p>Let $(g_{1},g_{2})\,,\,(h_{1},h_{2}) \in G_{1}\times G_{2}$. 
Then, $\begin{align}[(g_{1},g_{2})(h_{1},h_{2})]=(g_{1},g_{2})^{-1}(h_{1},h_{2})^{-1}(g_{1},g_{2})(h_{1},h_{2}) \\ =(g_{1}^{-1},g_{2}^{-1})(h_{1}^{-1},h_{2}^{-1})(g_{1},g_{2})(h_{1},h_{2}) = (g_{1}^{-1}h_{1}^{-1}g_{1}h_{1},g_{2}^{-1}h_{2}^{-1}g_{2}h_{2}) \\ = ([g_{1},h_{1}],[g_{2},h_{2}]) = ([g_{1},h_{1}],e_{G_{2}})(e_{G_{1}},[g_{2},h_{2}])\end{align}$.</p> <p>So, each commutator in $G_{1} \times G_{2}$ is the product of one in $G_{1}$ with one in $G_{2}$.</p> <p>Therefore, $\mathbf{(G_{1}\times G_{2})^{\prime}\subseteq G_{1}^{\prime}\times G_{2}^{\prime}}$.</p> <p>Now, to get the inclusion in the other direction, let $(g,h) \in G_{1}^{\prime} \times G_{2}^{\prime}$. Then, $g = [g_{1},g_{2}]\in G_{1}^{\prime}$ and $h = [h_{1},h_{2}] \in G_{2}^{\prime}$.</p> <p>So, </p> <p>$\begin{align}(g,h) = ([g_{1},g_{2}],[h_{1},h_{2}]) = (g_{1}^{-1}g_{2}^{-1}g_{1}g_{2}, h_{1}^{-1}h_{2}^{-1}h_{1}h_{2}) \\ = (g_{1}^{-1},h_{1}^{-1})(g_{2}^{-1},h_{2}^{-1})(g_{1},h_{1})(g_{2},h_{2}) \\ = (g_{1},h_{1})^{-1}(g_{2},h_{2})^{-1}(g_{1},h_{1})(g_{2},h_{2}) \\= [(g_{1},h_{1}),(g_{2},h_{2})] \end{align} $ </p> <p>where $(g_{1},h_{1}), (g_{2},h_{2}) \in G_{1} \times G_{2}$. </p> <p>Therefore, the product of a commutator in $G_{1}$ with a commutator in $G_{2}$ is a commutator in $G_{1} \times G_{2}$. Thus, $\mathbf{G_{1}^{\prime} \times G_{2}^{\prime} \subseteq (G_{1} \times G_{2})^{\prime}}$.</p> <p>And so, we get our equality, $G_{1}^{\prime} \times G_{2}^{\prime} = (G_{1} \times G_{2})^{\prime}$.</p> Answer 11: <p>Okay...I think this answer finally takes care of the difficulty. Thanks to arctictern for all his help :)</p> <hr> <p>Let $g \in (G_{1}\times G_{2})^{\prime}$. Then, since $(G_{1} \times G_{2})^{\prime}$ is the subgroup generated by all commutators of elements of $G_{1} \times G_{2}$, $g$ is the product of commutators of elements of $G_{1} \times G_{2}$. 
Letting $(a_{i},b_{i})$, $(c_{i},d_{i})\in G_{1} \times G_{2}$, we have that</p> <p>$\begin{align} g = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})]\,\text{for some}\, t, \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = ([a_{1},c_{1}]\cdot [a_{2},c_{2}]\cdot \,\, \cdots \, \, \cdot [a_{t},c_{t}], \,[b_{1},d_{1}]\cdot [b_{2},d_{2}]\cdot \,\, \cdots \, \, \cdot [b_{t},d_{t}]) \in G_{1}^{\prime} \times G_{2}^{\prime}\end{align}$</p> <p>So, we have the inclusion $\mathbf{(G_{1}\times G_{2})^{\prime}\subseteq G_{1}^{\prime}\times G_{2}^{\prime}}$</p> <hr> <p>In the other direction, let $h \in G_{1}^{\prime} \times G_{2}^{\prime}$.</p> <p>To belong in $ G_{1}^{\prime} \times G_{2}^{\prime}$, $h$ must be the direct product of an element of $G_{1}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{1}$, and an element of $G_{2}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{2}$.</p> <p>Let $h_{1}$ be such an element of $G_{1}^{\prime}$ and let $h_{2}$ be such an element of $G_{2}^{\prime}$, and let $h = h_{1} \times h_{2}$. 
Then,</p> <p>$h_{1} = [a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}]$ for some $t$, where each $a_{i}$, $c_{i}$ is an element of $G_{1}$</p> <p>and $h_{2} =[b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}]$ for some $t$, where each $b_{i}$, $d_{i}$ is an element of $G_{2}$.</p> <p>So, </p> <p>$\begin{align}h = h_{1} \times h_{2} = ([a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}], [b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}])\\ =(a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})] \in (G_{1}\times G_{2})^{\prime} \end{align}$</p> <p>So, we have the inclusion $\mathbf{G_{1}^{\prime}\times G_{2}^{\prime} \subseteq (G_{1}\times G_{2})^{\prime}}$.</p> <hr> <p>Thus, since we have inclusion in both directions, we have established the equality $\mathbf{(G_{1}\times G_{2})^{\prime} = G_{1}^{\prime}\times G_{2}^{\prime}}$ for the right things this time, hopefully.</p> Answer 12: <p>Let $(g_{1},g_{2})\,,\,(h_{1},h_{2}) \in G_{1}\times G_{2}$. 
Then, $\begin{align}[(g_{1},g_{2})(h_{1},h_{2})]=(g_{1},g_{2})^{-1}(h_{1},h_{2})^{-1}(g_{1},g_{2})(h_{1},h_{2}) \\ =(g_{1}^{-1},g_{2}^{-1})(h_{1}^{-1},h_{2}^{-1})(g_{1},g_{2})(h_{1},h_{2}) = (g_{1}^{-1}h_{1}^{-1}g_{1}h_{1},g_{2}^{-1}h_{2}^{-1}g_{2}h_{2}) \\ = ([g_{1},h_{1}],[g_{2},h_{2}]) = ([g_{1},h_{1}],e_{G_{2}})(e_{G_{1}},[g_{2},h_{2}])\end{align}$.</p> <p>So, each commutator in $G_{1} \times G_{2}$ is the product of one in $G_{1}$ with one in $G_{2}$.</p> <p>Therefore, $\mathbf{(G_{1}\times G_{2})^{\prime}\subseteq G_{1}^{\prime}\times G_{2}^{\prime}}$.</p> <p>Now, to get the inclusion in the other direction, let $(g,h) \in G_{1}^{\prime} \times G_{2}^{\prime}$. Then, $g = [g_{1},g_{2}]\in G_{1}^{\prime}$ and $h = [h_{1},h_{2}] \in G_{2}^{\prime}$.</p> <p>So, </p> <p>$\begin{align}(g,h) = ([g_{1},g_{2}],[h_{1},h_{2}]) = (g_{1}^{-1}g_{2}^{-1}g_{1}g_{2}, h_{1}^{-1}h_{2}^{-1}h_{1}h_{2}) \\ = (g_{1}^{-1},h_{1}^{-1})(g_{2}^{-1},h_{2}^{-1})(g_{1},h_{1})(g_{2},h_{2}) \\ = (g_{1},h_{1})^{-1}(g_{2},h_{2})^{-1}(g_{1},h_{1})(g_{2},h_{2}) \\= [(g_{1},h_{1}),(g_{2},h_{2})] \end{align} $ </p> <p>where $(g_{1},h_{1}), (g_{2},h_{2}) \in G_{1} \times G_{2}$. </p> <p>Therefore, the product of a commutator in $G_{1}$ with a commutator in $G_{2}$ is a commutator in $G_{1} \times G_{2}$. Thus, $\mathbf{G_{1}^{\prime} \times G_{2}^{\prime} \subseteq (G_{1} \times G_{2})^{\prime}}$.</p> <p>And so, we get our equality, $G_{1}^{\prime} \times G_{2}^{\prime} = (G_{1} \times G_{2})^{\prime}$.</p> Answer 13: <p>Okay...I think this answer finally takes care of the difficulty. Thanks to arctictern for all his help :)</p> <hr> <p>Let $g \in (G_{1}\times G_{2})^{\prime}$. Then, since $(G_{1} \times G_{2})^{\prime}$ is the subgroup generated by all commutators of elements of $G_{1} \times G_{2}$, $g$ is the product of commutators of elements of $G_{1} \times G_{2}$. 
Letting $(a_{i},b_{i})$, $(c_{i},d_{i})\in G_{1} \times G_{2}$, we have that</p> <p>$\begin{align} g = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})]\,\text{for some}\, t, \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = ([a_{1},c_{1}]\cdot [a_{2},c_{2}]\cdot \,\, \cdots \, \, \cdot [a_{t},c_{t}], \,[b_{1},d_{1}]\cdot [b_{2},d_{2}]\cdot \,\, \cdots \, \, \cdot [b_{t},d_{t}]) \in G_{1}^{\prime} \times G_{2}^{\prime}\end{align}$</p> <p>So, we have the inclusion $\mathbf{(G_{1}\times G_{2})^{\prime}\subseteq G_{1}^{\prime}\times G_{2}^{\prime}}$</p> <hr> <p>In the other direction, let $h \in G_{1}^{\prime} \times G_{2}^{\prime}$.</p> <p>To belong in $ G_{1}^{\prime} \times G_{2}^{\prime}$, $h$ must be the direct product of an element of $G_{1}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{1}$, and an element of $G_{2}^{\prime}$, which is the (group operation) product of commutators of elements of $G_{2}$.</p> <p>Let $h_{1}$ be such an element of $G_{1}^{\prime}$ and let $h_{2}$ be such an element of $G_{2}^{\prime}$, and let $h = h_{1} \times h_{2}$. 
Then,</p> <p>$h_{1} = [a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}]$ for some $t$, where each $a_{i}$, $c_{i}$ is an element of $G_{1}$</p> <p>and $h_{2} =[b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}]$ for some $t$, where each $b_{i}$, $d_{i}$ is an element of $G_{2}$.</p> <p>So, </p> <p>$\begin{align}h = h_{1} \times h_{2} = ([a_{1},c_{1}]\cdot [a_{2}, c_{2}] \cdot \, \, \cdots \, \, \cdot [a_{t},c_{t}], [b_{1},d_{1}]\cdot [b_{2}, d_{2}] \cdot \, \, \cdots \, \, \cdot [b_{t},d_{t}])\\ =(a_{1}^{-1}c_{1}^{-1}a_{1}c_{1}a_{2}^{-1}c_{2}^{-1}a_{2}c_{2}\cdots a_{t}^{-1}c_{t}^{-1}a_{t}c_{t}, \,b_{1}^{-1}d_{1}^{-1}b_{1}d_{1}b_{2}^{-1}d_{2}^{-1}b_{2}d_{2}\cdots b_{t}^{-1}d_{t}^{-1}b_{t}d_{t}) \\ = (a_{1}^{-1},b_{1}^{-1}) (c_{1}^{-1},d_{1}^{-1})(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2}^{-1},b_{2}^{-1})(c_{2}^{-1},d_{2}^{-1})(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t}^{-1},b_{t}^{-1})(c_{t}^{-1},d_{t}^{-1})(a_{t},b_{t})(c_{t},d_{t}) \\ = (a_{1},b_{1})^{-1}(c_{1},d_{1})^{-1}(a_{1},b_{1})(c_{1},d_{1})\cdot (a_{2},b_{2})^{-1}(c_{2},d_{2})^{-1}(a_{2},b_{2})(c_{2},d_{2})\cdot \, \, \cdots \, \,\cdot(a_{t},b_{t})^{-1}(c_{t},d_{t})^{-1}(a_{t},b_{t})(c_{t},d_{t}) \\ = [(a_{1},b_{1}),(c_{1},d_{1})]\cdot [(a_{2},b_{2}),(c_{2},d_{2})]\cdot\,\, \cdots \,\, \cdot[(a_{t},b_{t}),(c_{t},d_{t})] \in (G_{1}\times G_{2})^{\prime} \end{align}$</p> <p>So, we have the inclusion $\mathbf{G_{1}^{\prime}\times G_{2}^{\prime} \subseteq (G_{1}\times G_{2})^{\prime}}$.</p> <hr> <p>Thus, since we have inclusion in both directions, we have established the equality $\mathbf{(G_{1}\times G_{2})^{\prime} = G_{1}^{\prime}\times G_{2}^{\prime}}$ for the right things this time, hopefully.</p>
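The equality $(G_{1}\times G_{2})^{\prime} = G_{1}^{\prime}\times G_{2}^{\prime}$ proved in the answers above can be sanity-checked by brute force on small finite groups. A minimal sketch, assuming one concrete choice of groups ($S_3$ and $\mathbb{Z}/4\mathbb{Z}$ are arbitrary; any pair of small finite groups works):

```python
from itertools import permutations, product

def compose(p, q):
    # permutation composition: (compose(p, q))(i) = p[q[i]]
    return tuple(p[i] for i in q)

def pinv(p):
    # inverse permutation
    out = [0] * len(p)
    for i, v in enumerate(p):
        out[v] = i
    return tuple(out)

S3 = set(permutations(range(3)))        # symmetric group S_3 as tuples
e3 = (0, 1, 2)

Z4 = set(range(4))                      # cyclic group Z/4Z, written additively
add = lambda a, b: (a + b) % 4
neg = lambda a: (-a) % 4

def commutator_subgroup(G, op, inv, e):
    # all commutators [a, b] = a^-1 b^-1 a b, then close under the operation;
    # in a finite group, closure under products already yields a subgroup
    gens = {op(op(inv(a), inv(b)), op(a, b)) for a in G for b in G}
    sub = gens | {e}
    while True:
        new = {op(a, b) for a in sub for b in sub} - sub
        if not new:
            return sub
        sub |= new

# the direct product S3 x Z4 with componentwise operation, as in the proof
GxH = set(product(S3, Z4))
op = lambda x, y: (compose(x[0], y[0]), add(x[1], y[1]))
inv = lambda x: (pinv(x[0]), neg(x[1]))

lhs = commutator_subgroup(GxH, op, inv, (e3, 0))
rhs = set(product(commutator_subgroup(S3, compose, pinv, e3),
                  commutator_subgroup(Z4, add, neg, 0)))
print(lhs == rhs, len(lhs))             # True 3  (here G' x H' = A_3 x {0})
```

Here the commutator subgroup is computed as the closure of the full set of commutators under the group operation, mirroring the "product of commutators" step in the proof above.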
What is the general solution to $2 \cos^2 x-\cos x=0$?
2,081,473
5
https://math.stackexchange.com/questions/2081473/what-is-the-general-solution-to-2-cos2-x-cos-x-0
<p>What is the general solution to this trig equation?</p> <p>$$2 \cos^2 x-\cos x=0$$</p> <p>Thanks.</p>
[ { "answer_id": 2081770, "score": 19, "text": "<p>To make it easier to visualise and solve, you can substitute $u=\\cos{x}$.\n$$2\\cos^2{x}-\\cos{x}=0$$\n$$2u^2-u=0$$\nFrom here, you can either use the quadratic formula or factorise. I chose to factorise:\n$$u(2u-1)=0$$\nHence we have the solutions $u=0,\\frac{1}{2}$.\nNow, we solve the equations resulting from our substitution $u=\\cos{x}$:\n$$\\cos{x}=0 \\tag{1}$$\n$$\\cos{x}=\\frac{1}{2} \\tag{2}$$</p>\n\n<p>For equation $(1)$, we obtain:`\n$$x=\\frac{\\pi}{2}+k\\pi$$\nFor equation $(2)$, we obtain:\n$$x=2k\\pi+\\frac{\\pi}{3}$$\nAnd\n$$x=2k\\pi-\\frac{\\pi}{3}$$\nWhere $k \\in \\mathbb{Z}$.</p>\n\n<p>Hence, our general solution is:\n$$x=\\begin{cases} \\frac{\\pi}{2}+k\\pi \\\\ 2k\\pi+\\frac{\\pi}{3} \\\\ 2k\\pi-\\frac{\\pi}{3} \\end{cases}$$</p>\n" }, { "answer_id": 2081474, "score": 13, "text": "<p><strong>hint</strong>: $2\\cos^2 x -\\cos x = \\cos x(2\\cos x - 1)=0$</p>\n" }, { "answer_id": 2081486, "score": 9, "text": "<p>Notice that $$2 \\cos^2 x-\\cos x=0\\implies 2\\cos^2x=\\cos x$$</p>\n\n<p>So either $\\cos x=0$ or $\\cos x=\\frac{1}{2}$. </p>\n\n<p>Now find general solution</p>\n" }, { "answer_id": 2081545, "score": 5, "text": "<p>$$2\\cos^2(x) - \\cos(x) = 0 \\implies \\cos(x)(2\\cos(x) - 1) = 0$$</p>\n\n<p>For what $x$ is $\\cos(x) = 0, \\frac{1}{2}$? 
There are only 2 solutions that form all distinct solutions; all others are $2k\\pi{}$ away from them</p>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Special_right_triangle#30.C2.B0.E2.80.9360.C2.B0.E2.80.9390.C2.B0_triangle\" rel=\"nofollow noreferrer\">Have a look</a></p>\n" }, { "answer_id": 2081477, "score": 2, "text": "<p>$2\\cos^2 x-\\cos x=0$</p>\n\n<p>$\\implies \\cos x(2\\cos x-1)=0$</p>\n\n<p>Either</p>\n\n<p>$\\cos x=0$</p>\n\n<p>$\\implies x=\\dfrac{\\pi}{2}\\pm 2n\\pi$</p>\n\n<p>Or</p>\n\n<p>$2\\cos x-1=0$</p>\n\n<p>$\\implies \\cos x=\\dfrac{1}{2}$</p>\n\n<p>$\\implies x=\\dfrac{\\pi}{3}\\pm 2n\\pi$</p>\n\n<p>where\n $n \\in Z$</p>\n" } ]
Question: What is the general solution to $2 \cos^2 x-\cos x=0$? Answer 1: <p>To make it easier to visualise and solve, you can substitute $u=\cos{x}$. $$2\cos^2{x}-\cos{x}=0$$ $$2u^2-u=0$$ From here, you can either use the quadratic formula or factorise. I chose to factorise: $$u(2u-1)=0$$ Hence we have the solutions $u=0,\frac{1}{2}$. Now, we solve the equations resulting from our substitution $u=\cos{x}$: $$\cos{x}=0 \tag{1}$$ $$\cos{x}=\frac{1}{2} \tag{2}$$</p> <p>For equation $(1)$, we obtain:` $$x=\frac{\pi}{2}+k\pi$$ For equation $(2)$, we obtain: $$x=2k\pi+\frac{\pi}{3}$$ And $$x=2k\pi-\frac{\pi}{3}$$ Where $k \in \mathbb{Z}$.</p> <p>Hence, our general solution is: $$x=\begin{cases} \frac{\pi}{2}+k\pi \\ 2k\pi+\frac{\pi}{3} \\ 2k\pi-\frac{\pi}{3} \end{cases}$$</p> Answer 2: <p><strong>hint</strong>: $2\cos^2 x -\cos x = \cos x(2\cos x - 1)=0$</p> Answer 3: <p>Notice that $$2 \cos^2 x-\cos x=0\implies 2\cos^2x=\cos x$$</p> <p>So either $\cos x=0$ or $\cos x=\frac{1}{2}$. </p> <p>Now find general solution</p> Answer 4: <p>$$2\cos^2(x) - \cos(x) = 0 \implies \cos(x)(2\cos(x) - 1) = 0$$</p> <p>For what $x$ is $\cos(x) = 0, \frac{1}{2}$? There are only 2 solutions that form all distinct solutions; all others are $2k\pi{}$ away from them</p> <p><a href="https://en.wikipedia.org/wiki/Special_right_triangle#30.C2.B0.E2.80.9360.C2.B0.E2.80.9390.C2.B0_triangle" rel="nofollow noreferrer">Have a look</a></p> Answer 5: <p>$2\cos^2 x-\cos x=0$</p> <p>$\implies \cos x(2\cos x-1)=0$</p> <p>Either</p> <p>$\cos x=0$</p> <p>$\implies x=\dfrac{\pi}{2}\pm 2n\pi$</p> <p>Or</p> <p>$2\cos x-1=0$</p> <p>$\implies \cos x=\dfrac{1}{2}$</p> <p>$\implies x=\dfrac{\pi}{3}\pm 2n\pi$</p> <p>where $n \in Z$</p>
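The answers above all reduce the equation to $\cos x \in \{0, \tfrac{1}{2}\}$, giving the general solution $x=\frac{\pi}{2}+k\pi$ or $x=2k\pi\pm\frac{\pi}{3}$. As a quick numerical sanity check of that solution set (a verification sketch added here, not part of the original thread):

```python
import math

# Residual of the original equation: should be ~0 at every solution.
def residual(x):
    return 2 * math.cos(x) ** 2 - math.cos(x)

# Build candidates from the claimed general solution for several k.
solutions = []
for k in range(-3, 4):
    solutions.append(math.pi / 2 + k * math.pi)      # from cos x = 0
    solutions.append(2 * k * math.pi + math.pi / 3)  # from cos x = 1/2
    solutions.append(2 * k * math.pi - math.pi / 3)  # from cos x = 1/2

assert all(abs(residual(x)) < 1e-12 for x in solutions)
print("all", len(solutions), "candidates satisfy the equation")
```
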
Proving the identity ${n \choose k} = {\sum\limits_{m=k-1}^{n-1} {m \choose k-1}}$ by induction over t=n-k
2,081,759
5
https://math.stackexchange.com/questions/2081759/proving-the-identity-n-choose-k-sum-limits-m-k-1n-1-m-choose-k-1
<p>Of the identity ${n \choose k} = {\sum\limits_{m=k-1}^{n-1} {m \choose k-1}}$, a proof by induction over $d=n-k$ should be possible. I can only see how $${n+1 \choose k} = {n \choose k} + {n \choose k-1}$$ (by Pascal's identity)</p> <p>$$ ={\sum_{m=k-1}^{n-1} {m \choose k-1}} + {n \choose k-1}$$ (by induction hypothesis) $$={\sum_{m=k-1}^{n} {m \choose k-1}}$$ leads to the result. Can someone point me into the right direction for an induction over $d=n-k$? </p>
[ { "answer_id": 2082506, "score": 1, "text": "<p>Since $d=n-k$, you can rewrite it as</p>\n\n<p>$$\\binom{d+k}k=\\sum_{m=k-1}^{d+k-1}\\binom{m}{k-1}\\;;$$</p>\n\n<p>let $\\ell=m-(k-1)$, and this can in turn be rewritten</p>\n\n<p>$$\\binom{d+k}k=\\sum_{\\ell=0}^d\\binom{\\ell+k-1}{k-1}\\;,$$</p>\n\n<p>and you want to prove this by induction on $d$. Just use the same approach that you took for your induction on $n$.</p>\n" }, { "answer_id": 2082536, "score": 0, "text": "<p>You ask for an inductive proof, but I cannot resist a combinatorial proof when the opportunity presents itself.</p>\n\n<p>The lefthand side, $\\binom{n}{k}$, counts the number of $k$-element subsets of the set $\\{1, 2, \\dots, n\\}$. Partition the collection of all such subsets according to their largest element. I will demonstrate.</p>\n\n<p>The number of $k$-element subsets whose largest element is $k$ is $\\binom{k-1}{k-1}$; choose $k-1$ elements from the set $\\{1, 2, \\dots, k-1\\}$ and then include the element $k$. (This can be done in only one way, of course.)</p>\n\n<p>The number of $k$-element subsets whose largest element is $k+1$ is $\\binom{k}{k-1}$; choose $k-1$ elements from the set $\\{1, 2, \\dots, k\\}$ and then include the element $k+1$.</p>\n\n<p>In general, the number of $k$-element subsets whose largest element is $m+1$ is $\\binom{m}{k-1}$; choose $k-1$ elements from the set $\\{1, 2, \\dots, m\\}$ and then include the element $m+1$. Summing over $m$ gives the desired identity.</p>\n" } ]
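Both answers establish the hockey-stick identity $\binom{n}{k}=\sum_{m=k-1}^{n-1}\binom{m}{k-1}$ (one by induction, one combinatorially). A brute-force check over small $n$ and $k$ (a verification sketch, not part of the original thread):

```python
from math import comb

# Hockey-stick identity: C(n, k) == sum of C(m, k-1) for m = k-1 .. n-1.
def hockey_stick_holds(n, k):
    return comb(n, k) == sum(comb(m, k - 1) for m in range(k - 1, n))

# Exhaustively verify for all 1 <= k <= n < 20.
assert all(hockey_stick_holds(n, k) for n in range(1, 20) for k in range(1, n + 1))
print("identity verified for all 1 <= k <= n < 20")
```
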
Question: Proving the identity ${n \choose k} = {\sum\limits_{m=k-1}^{n-1} {m \choose k-1}}$ by induction over t=n-k Answer 1: <p>Since $d=n-k$, you can rewrite it as</p> <p>$$\binom{d+k}k=\sum_{m=k-1}^{d+k-1}\binom{m}{k-1}\;;$$</p> <p>let $\ell=m-(k-1)$, and this can in turn be rewritten</p> <p>$$\binom{d+k}k=\sum_{\ell=0}^d\binom{\ell+k-1}{k-1}\;,$$</p> <p>and you want to prove this by induction on $d$. Just use the same approach that you took for your induction on $n$.</p> Answer 2: <p>You ask for an inductive proof, but I cannot resist a combinatorial proof when the opportunity presents itself.</p> <p>The lefthand side, $\binom{n}{k}$, counts the number of $k$-element subsets of the set $\{1, 2, \dots, n\}$. Partition the collection of all such subsets according to their largest element. I will demonstrate.</p> <p>The number of $k$-element subsets whose largest element is $k$ is $\binom{k-1}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, k-1\}$ and then include the element $k$. (This can be done in only one way, of course.)</p> <p>The number of $k$-element subsets whose largest element is $k+1$ is $\binom{k}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, k\}$ and then include the element $k+1$.</p> <p>In general, the number of $k$-element subsets whose largest element is $m+1$ is $\binom{m}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, m\}$ and then include the element $m+1$. Summing over $m$ gives the desired identity.</p> Answer 11: <p>Since $d=n-k$, you can rewrite it as</p> <p>$$\binom{d+k}k=\sum_{m=k-1}^{d+k-1}\binom{m}{k-1}\;;$$</p> <p>let $\ell=m-(k-1)$, and this can in turn be rewritten</p> <p>$$\binom{d+k}k=\sum_{\ell=0}^d\binom{\ell+k-1}{k-1}\;,$$</p> <p>and you want to prove this by induction on $d$. 
Just use the same approach that you took for your induction on $n$.</p> Answer 12: <p>You ask for an inductive proof, but I cannot resist a combinatorial proof when the opportunity presents itself.</p> <p>The lefthand side, $\binom{n}{k}$, counts the number of $k$-element subsets of the set $\{1, 2, \dots, n\}$. Partition the collection of all such subsets according to their largest element. I will demonstrate.</p> <p>The number of $k$-element subsets whose largest element is $k$ is $\binom{k-1}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, k-1\}$ and then include the element $k$. (This can be done in only one way, of course.)</p> <p>The number of $k$-element subsets whose largest element is $k+1$ is $\binom{k}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, k\}$ and then include the element $k+1$.</p> <p>In general, the number of $k$-element subsets whose largest element is $m+1$ is $\binom{m}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, m\}$ and then include the element $m+1$. Summing over $m$ gives the desired identity.</p> Answer 13: <p>Since $d=n-k$, you can rewrite it as</p> <p>$$\binom{d+k}k=\sum_{m=k-1}^{d+k-1}\binom{m}{k-1}\;;$$</p> <p>let $\ell=m-(k-1)$, and this can in turn be rewritten</p> <p>$$\binom{d+k}k=\sum_{\ell=0}^d\binom{\ell+k-1}{k-1}\;,$$</p> <p>and you want to prove this by induction on $d$. Just use the same approach that you took for your induction on $n$.</p> Answer 14: <p>You ask for an inductive proof, but I cannot resist a combinatorial proof when the opportunity presents itself.</p> <p>The lefthand side, $\binom{n}{k}$, counts the number of $k$-element subsets of the set $\{1, 2, \dots, n\}$. Partition the collection of all such subsets according to their largest element. I will demonstrate.</p> <p>The number of $k$-element subsets whose largest element is $k$ is $\binom{k-1}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, k-1\}$ and then include the element $k$. 
(This can be done in only one way, of course.)</p> <p>The number of $k$-element subsets whose largest element is $k+1$ is $\binom{k}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, k\}$ and then include the element $k+1$.</p> <p>In general, the number of $k$-element subsets whose largest element is $m+1$ is $\binom{m}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, m\}$ and then include the element $m+1$. Summing over $m$ gives the desired identity.</p> Answer 15: <p>Since $d=n-k$, you can rewrite it as</p> <p>$$\binom{d+k}k=\sum_{m=k-1}^{d+k-1}\binom{m}{k-1}\;;$$</p> <p>let $\ell=m-(k-1)$, and this can in turn be rewritten</p> <p>$$\binom{d+k}k=\sum_{\ell=0}^d\binom{\ell+k-1}{k-1}\;,$$</p> <p>and you want to prove this by induction on $d$. Just use the same approach that you took for your induction on $n$.</p> Answer 16: <p>You ask for an inductive proof, but I cannot resist a combinatorial proof when the opportunity presents itself.</p> <p>The lefthand side, $\binom{n}{k}$, counts the number of $k$-element subsets of the set $\{1, 2, \dots, n\}$. Partition the collection of all such subsets according to their largest element. I will demonstrate.</p> <p>The number of $k$-element subsets whose largest element is $k$ is $\binom{k-1}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, k-1\}$ and then include the element $k$. (This can be done in only one way, of course.)</p> <p>The number of $k$-element subsets whose largest element is $k+1$ is $\binom{k}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, k\}$ and then include the element $k+1$.</p> <p>In general, the number of $k$-element subsets whose largest element is $m+1$ is $\binom{m}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, m\}$ and then include the element $m+1$. 
Summing over $m$ gives the desired identity.</p> Answer 17: <p>Since $d=n-k$, you can rewrite it as</p> <p>$$\binom{d+k}k=\sum_{m=k-1}^{d+k-1}\binom{m}{k-1}\;;$$</p> <p>let $\ell=m-(k-1)$, and this can in turn be rewritten</p> <p>$$\binom{d+k}k=\sum_{\ell=0}^d\binom{\ell+k-1}{k-1}\;,$$</p> <p>and you want to prove this by induction on $d$. Just use the same approach that you took for your induction on $n$.</p> Answer 18: <p>You ask for an inductive proof, but I cannot resist a combinatorial proof when the opportunity presents itself.</p> <p>The lefthand side, $\binom{n}{k}$, counts the number of $k$-element subsets of the set $\{1, 2, \dots, n\}$. Partition the collection of all such subsets according to their largest element. I will demonstrate.</p> <p>The number of $k$-element subsets whose largest element is $k$ is $\binom{k-1}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, k-1\}$ and then include the element $k$. (This can be done in only one way, of course.)</p> <p>The number of $k$-element subsets whose largest element is $k+1$ is $\binom{k}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, k\}$ and then include the element $k+1$.</p> <p>In general, the number of $k$-element subsets whose largest element is $m+1$ is $\binom{m}{k-1}$; choose $k-1$ elements from the set $\{1, 2, \dots, m\}$ and then include the element $m+1$. Summing over $m$ gives the desired identity.</p>
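The hockey-stick identity in the record above is easy to spot-check numerically before attempting the induction. A minimal Python sketch (the function name is mine, not from the thread) compares both sides of $\binom{n}{k}=\sum_{m=k-1}^{n-1}\binom{m}{k-1}$ over a grid:

```python
from math import comb

def hockey_stick_holds(n: int, k: int) -> bool:
    """Check C(n, k) == sum_{m=k-1}^{n-1} C(m, k-1)."""
    return comb(n, k) == sum(comb(m, k - 1) for m in range(k - 1, n))

# Exhaustive check on a small grid of (n, k) with 1 <= k <= n
assert all(hockey_stick_holds(n, k) for n in range(1, 30) for k in range(1, n + 1))
print("identity holds for all 1 <= k <= n < 30")
```

Such a check does not replace the proof, but it catches off-by-one errors in the summation limits before they creep into the induction step.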
Use Mean value theorem to prove the following inequality
2,081,769
5
https://math.stackexchange.com/questions/2081769/use-mean-value-theorem-to-prove-the-following-inequality
<p>A) Use the Mean value theorem to prove that \begin{equation} \sqrt{1+x} &lt; 1 + \frac{1}{2}x \text{ if } x&gt;0 \end{equation}</p> <p>B) Use result in A) to prove that \begin{equation} \sqrt{1+x}&gt;1+\frac{1}{2}x-\frac{1}{8}x^2 \text{ if } x&gt;0 \end{equation}</p> <p>Can someone give an answer for part B) ?</p>
[ { "answer_id": 2081776, "score": 12, "text": "<p>From (A), for $x&gt;0$ we have: </p>\n\n<p>$0&lt;\\sqrt{x+1}-1&lt;\\frac{1}{2}x\\Rightarrow {(\\sqrt{x+1}-1)}^2&lt;\\frac{1}{4}x^2\\Rightarrow {(\\sqrt{x+1})}^2+1^2-2\\sqrt{x+1}&lt;\\frac{1}{4}x^2 \\Rightarrow \\sqrt{x+1}&gt;1+\\frac{x}{2}-\\frac{x^2}{8}$</p>\n" }, { "answer_id": 2081787, "score": 2, "text": "<p>OK, I'll show how I'd solve a:</p>\n\n<p>We have </p>\n\n<p>$$\nf(x) = 1+ \\frac{x}{2} - \\sqrt{1+x} \n$$</p>\n\n<p>Clearly $f(0) = 0$. Let's look at the derivative of $f$. If we show that the derivative is strictly positive for $x&gt;0$, the function is strictly increasing, hence positive (in combination with $f(0)$) for $x&gt;0$. We have </p>\n\n<p>$$\nf'(x) = \\frac{1}{2} - \\frac{1}{2\\sqrt{1+x}} = \\frac{\\sqrt{1+x}-1}{2\\sqrt{1+x}}\n$$</p>\n\n<p>Since $x&gt;0$ we have $\\sqrt{1+x}&gt;1$, so the numerator is positive, hence the derivative is strictly positive and the function is increasing. Hence, </p>\n\n<p>$$\nf(x)&gt;0 \\to 1+\\frac{x}{2} &gt; \\sqrt{1+x}\n$$ </p>\n" } ]
Question: Use Mean value theorem to prove the following inequality Answer 1: <p>From (A), for $x&gt;0$ we have: </p> <p>$0&lt;\sqrt{x+1}-1&lt;\frac{1}{2}x\Rightarrow {(\sqrt{x+1}-1)}^2&lt;\frac{1}{4}x^2\Rightarrow {(\sqrt{x+1})}^2+1^2-2\sqrt{x+1}&lt;\frac{1}{4}x^2 \Rightarrow \sqrt{x+1}&gt;1+\frac{x}{2}-\frac{x^2}{8}$</p> Answer 2: <p>OK, I'll show how I'd solve a:</p> <p>We have </p> <p>$$ f(x) = 1+ \frac{x}{2} - \sqrt{1+x} $$</p> <p>Clearly $f(0) = 0$. Let's look at the derivative of $f$. If we show that the derivative is strictly positive for $x&gt;0$, the function is strictly increasing, hence positive (in combination with $f(0)$) for $x&gt;0$. We have </p> <p>$$ f'(x) = \frac{1}{2} - \frac{1}{2\sqrt{1+x}} = \frac{\sqrt{1+x}-1}{2\sqrt{1+x}} $$</p> <p>Since $x&gt;0$ we have $\sqrt{1+x}&gt;1$, so the numerator is positive, hence the derivative is strictly positive and the function is increasing. Hence, </p> <p>$$ f(x)&gt;0 \to 1+\frac{x}{2} &gt; \sqrt{1+x} $$ </p>
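Both bounds from this record, $1+\tfrac{x}{2}-\tfrac{x^2}{8} < \sqrt{1+x} < 1+\tfrac{x}{2}$ for $x>0$, can be sanity-checked numerically. A minimal Python sketch (the sampling range is my choice; very small $x$ is avoided only because the lower gap is $O(x^3)$ and falls below floating-point resolution):

```python
import math
import random

def bounds_hold(x: float) -> bool:
    """sqrt(1+x) lies strictly between 1 + x/2 - x^2/8 and 1 + x/2 for x > 0."""
    s = math.sqrt(1.0 + x)
    return 1.0 + x / 2.0 - x * x / 8.0 < s < 1.0 + x / 2.0

random.seed(0)
# Away from x = 0 the gaps are comfortably above floating-point resolution
assert all(bounds_hold(random.uniform(0.01, 100.0)) for _ in range(10_000))
print("both bounds confirmed on 10000 random points in (0.01, 100)")
```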
Closed form for $\sum\limits_{r=-\infty}^\infty \sum\limits_{s=-\infty}^\infty \frac{1}{(k^2\tau^2+(x+2\pi r)^2+(y+2\pi s)^2)^{3/2}}$
2,082,076
5
https://math.stackexchange.com/questions/2082076/closed-form-for-sum-limits-r-infty-infty-sum-limits-s-infty-infty
<p>I have a problem while trying to find a closed form for the following double sum. $$\sum_{r=-\infty}^\infty \sum_{s=-\infty}^\infty \frac{1}{(k^2\tau^2+(x+2\pi r)^2+(y+2\pi s)^2)^{3/2}}$$</p> <p>I used Mathematica for the evaluation but it just returns the double sum. I am wondering how this kind of infinite sum can be handled.</p> <p>Any hints or suggestions are welcomed, perhaps by proposing some ideas to simplify it or making some approximations.</p> <hr> <p>@Dr. Wolfgang Hintze Thank you very much for the detailed explanation. I have a question, when I try to plot the double sum I get an error ''NSum :Summand (or its derivative)... - is not numerical at point r = 0 ''? To plot it I just wrote this :</p> <p><a href="https://i.stack.imgur.com/NjTjO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NjTjO.png" alt="enter image description here"></a> What is the problem in this formula?</p>
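For readers who only need numbers: the series is absolutely convergent, so brute-force truncation of the index range already works well and sidesteps symbolic NSum issues like the one shown above. A hedged Python sketch, writing $z$ for $k\tau$; the cutoff N and the test point are arbitrary choices of mine, not from the question:

```python
import math

def f_truncated(x: float, y: float, z: float, N: int = 50) -> float:
    """Truncate the doubly infinite sum to r, s in [-N, N]; z stands for k*tau."""
    total = 0.0
    for r in range(-N, N + 1):
        for s in range(-N, N + 1):
            total += (z * z + (x + 2 * math.pi * r) ** 2
                      + (y + 2 * math.pi * s) ** 2) ** -1.5
    return total

# Terms decay like 1/(r^2 + s^2)^(3/2), so doubling the cutoff changes little:
a = f_truncated(1.0, 1.0, 1.0, N=40)
b = f_truncated(1.0, 1.0, 1.0, N=80)
print(a, b, b - a)  # the two truncations agree to a few decimal places
```

Because every term is positive, the truncations increase monotonically in N, which gives a cheap convergence diagnostic.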
[ { "answer_id": 2082129, "score": 6, "text": "<p>This is not a solution, as I haven't found a closed form expression (and I doubt meanwhile that it exists) but a collection of illustrative remarks that might be useful. </p>\n\n<p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p>\n\n<p>1) While the sum</p>\n\n<p>$$f=\\sum _{r=-\\infty }^{\\infty } \\sum _{s=-\\infty }^{\\infty } \\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}$$</p>\n\n<p>is returned unevaluated by Mathematica, the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p>\n\n<p>$$fi=\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }\\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}dsdr = \\frac{1}{2 \\pi \\sqrt{k^2 \\tau ^2}}$$</p>\n\n<p>This is of course a purely technical remark as the integral lacks the essential dependence on $x$ and $y$.</p>\n\n<p>2) Numerical investigation shows that but a few summands are sufficient to obtain a reasonable approximation to the complete sum.</p>\n\n<p>3) Plots of the double sum </p>\n\n<p>3a) 3D-plot for two values of $z = k \\tau$</p>\n\n<p>$z = 1$</p>\n\n<p><a href=\"https://i.stack.imgur.com/C1gdb.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/C1gdb.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>$z = \\pi /2$ (this is the value to be assigned to z according to the OP)</p>\n\n<p><a href=\"https://i.stack.imgur.com/xZbt3.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/xZbt3.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We see the expected regular spike structure. 
The spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p>\n\n<p>3b) Cross sections $y = 0$ for different values of z</p>\n\n<p><a href=\"https://i.stack.imgur.com/fQfIr.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/fQfIr.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>3c) Maximum value of $f$ as a function of $z$</p>\n\n<p><a href=\"https://i.stack.imgur.com/aKiTz.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aKiTz.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Notice that $f \\simeq 1/z^3$ for small $z$, which can also be derived directly from the double sum as is shown in section 5).</p>\n\n<p>3d) Minimum value of $f$ as a function of $z$ </p>\n\n<p><a href=\"https://i.stack.imgur.com/MPnhY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/MPnhY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>4) Physical interpretation of the double sum.</p>\n\n<p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \\pi (r,s)$.</p>\n\n<p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p>\n\n<p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p>\n\n<p>5) Asymptotic behaviour </p>\n\n<p>5a) For small $z$ we have $f \\simeq 1/z^3$.</p>\n\n<p>Indeed, for small $z$ we can write</p>\n\n<p>$$f(0,0,z\\to 0)\\simeq \\frac{4}{(2 \\pi )^{3}}\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}+\\frac{1}{z^3}$$</p>\n\n<p>The double sum is a finite constant as can be shown thus</p>\n\n<p>$$fr=\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}&lt;\\int _1^{\\infty }\\int _1^{\\infty }\\frac{1}{\\left(r^2+s^2\\right)^{3/2}}dsdr=2-\\sqrt{2}\\simeq 
0.585786\\text{...}$$</p>\n\n<p>This proves the statement. </p>\n\n<p>5b) For $z\\to\\infty$ we find $f\\simeq \\frac{1}{2 \\pi z}$. </p>\n\n<p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p>\n\n<p>5c) Both limits are easily understood from the physical analogy: </p>\n\n<p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to $z \\cdot \\frac{1}{z^3} = \\frac{1}{z^2}$, in agreement with Newton's law of gravity.</p>\n\n<p>For large $z$, on the other hand, the grid structure is \"invisible\" to the point mass, and the force becomes equal to the force of an infinite massive plate, and this is constant. Indeed we have found $f \\simeq 1/z$, times $z$ this is constant.</p>\n\n<p>6) Fourier form of the double sum</p>\n\n<p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p>\n\n<p>$$fc(x,y,z)=\\frac{1}{2 \\pi z}\\sum _{n=-\\infty }^{\\infty } \\sum _{m=-\\infty }^{\\infty } e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)$$</p>\n\n<p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p>\n\n<p>$$fe(x,y,z)=\\int_0^{\\infty } \\frac{e^{-t z^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4\\; t}}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4\\; t}}\\right)}{2 \\pi ^{3/2} \\sqrt{t}} \\, dt$$</p>\n\n<p>Substituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p>\n\n<p>$$\\vartheta _3\\left(\\frac{u}{2},q\\right)=\\sum _{n=-\\infty }^{\\infty } q^{n^2} \\cos (n u)$$</p>\n\n<p>and doing the $t$-integral gives</p>\n\n<p>$$\\cos (m y) \\cos (n x) \\int_0^{\\infty } \\frac{\\exp \\left(-\\frac{m^2+n^2}{4 t}-t z^2\\right)}{\\left(2 \\pi ^{3/2}\\right) \\sqrt{t}} \\, dt=\\frac{e^{-z 
\\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)}{2 \\pi z}$$</p>\n\n<p>The double sum of which is $fc$. Done.</p>\n\n<p>Remark: it is interesting that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p>\n\n<p>$$\\frac{1}{2 \\pi z}\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)dmdn=\\frac{1}{\\left(x^2+y^2+z^2\\right)^{3/2}}$$</p>\n\n<p>This expression is less trivial than the one in 1) but the double periodic structure is washed out as well.</p>\n\n<p>7) Approximations</p>\n\n<p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p>\n\n<p>It turns out that for $z\\gtrsim 1$ the Fourier double sum $fc$ requires fewer terms than the plain sum $f$, while for $z\\lesssim 1$ the situation is reversed.\nThe following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p>\n\n<p>The case $z = \\pi /2$ (this is the value of the OP)</p>\n\n<p><a href=\"https://i.stack.imgur.com/eIHcS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/eIHcS.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/duUs9.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/duUs9.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>The case $z = 1/2$</p>\n\n<p><a href=\"https://i.stack.imgur.com/XEw1m.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/XEw1m.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/bnjkG.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bnjkG.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We conclude that if $z$ takes on the value $\\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. 
Hence we can state (somewhat boldly) that this is the requested closed form expression.</p>\n" }, { "answer_id": 2082122, "score": 2, "text": "<p>Not really a full answer, but too long for a comment. You can always use the identity\n$$\n\\frac{1}{Z^{3/2}}=\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi Z}\\ ,\n$$\nfor positive $Z$. Your double sum then reads\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\sum_{r=-\\infty}^\\infty e^{-\\xi (x+2\\pi r)^2}\\sum_{s=-\\infty}^\\infty e^{-\\xi (y+2\\pi s)^2}\n$$\nand evaluating the two sums in terms of Elliptic theta functions, one gets\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\frac{\\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\\frac{\\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\n$$\n$$\n=\\int_0^{\\infty } \\frac{e^{-k^2 \\xi \\tau ^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi }}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi }}\\right)}{2 \\pi^{3/2} \\sqrt{\\xi }} \\, d\\xi.\n$$\nThe integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p>\n" }, { "answer_id": 2085653, "score": 1, "text": "<p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. \nThe point of this answer is just to propose something which could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p>\n\n<p>Intuitively, $|\\cos(nx) \\cos(y n)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum become</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}\\sum_{n=1}^\\infty \\sum_{m=1}^\\infty e^{-z\\sqrt{m^2+n^2}}$.</p>\n\n<p>Again this expression is not tractable with Mathematica, so I thought that approximation can be made using the Euler–Maclaurin formula.</p>\n\n<p>I start by doing a simple modification on $f_{s}$ :</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}[\\sum_{n=0}^\\infty \\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-2]$.</p>\n\n<p>In one variable, Euler_Maclaurin formula state that $\\sum_{n=0}^\\infty f(n)=\\frac{1}{2}f(0)+\\int_{0}^{\\infty}f(n) dn$, I will use this twice in what follow.</p>\n\n<p>We know that\n$\\frac{\\pi}{2z^2} = \\int_{0}^{\\infty}\\int_{0}^{\\infty}e^{-z\\sqrt{m^2+n^2}} dn dm$</p>\n\n<p>then</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\int_{0}^{\\infty}[\\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}e^{-zm}]dm$</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}\\sum_{n=0}^\\infty e^{-zn}+\\frac{1}{4}$</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}+\\frac{1}{4}$</p>\n\n<p>So, we can now conclude that </p>\n\n<p>$\\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}= \\frac{\\pi}{2z^2}+\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}-\\frac{1}{4}.$</p>\n\n<p>This all what I was able to do, but I'am not sure that my assumption ($|\\cos(nx) \\cos(y n)|$=c) is reasonable to guarantee a good approximation. What do you think ?</p>\n" }, { "answer_id": 2082129, "score": 6, "text": "<p>This is not a solution, as I haven't found a closed form expression (and I doubt meanwhile that it exists) but a collection of illustrative remarks that might be useful. 
</p>\n\n<p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p>\n\n<p>1) While the sum</p>\n\n<p>$$f=\\sum _{r=-\\infty }^{\\infty } \\sum _{s=-\\infty }^{\\infty } \\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}$$</p>\n\n<p>is returned unevaluated by Mathematica the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p>\n\n<p>$$fi=\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }\\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}dsdr = \\frac{1}{2 \\pi \\sqrt{k^2 \\tau ^2}}$$</p>\n\n<p>This is of course a purely technical remark as the integral lacks the essential dependence on $x$ and $y$.</p>\n\n<p>2) Numerical investigation shows that but a few summand are sufficient to obtain a reasonable approximation to the complete sum.</p>\n\n<p>3) Plots of the double sum </p>\n\n<p>3a) 3D-plot for two values of $z = k \\tau$</p>\n\n<p>$z = 1$</p>\n\n<p><a href=\"https://i.stack.imgur.com/C1gdb.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/C1gdb.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>$z = \\pi /2$ (this is the value to be assigned to z acording to the OP)</p>\n\n<p><a href=\"https://i.stack.imgur.com/xZbt3.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/xZbt3.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We see the expected regular spike structure. 
Where the spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p>\n\n<p>3b) Cross sections $y = 0$ for different values of z</p>\n\n<p><a href=\"https://i.stack.imgur.com/fQfIr.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/fQfIr.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>3c) Maximum value of $f$ as a function of $z$</p>\n\n<p><a href=\"https://i.stack.imgur.com/aKiTz.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aKiTz.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Notice that $f \\simeq 1/z^3$ for small $z$. Which can also be derived directly form the double sum as is shown in section 5).</p>\n\n<p>3d) Minimum value of $f$ as a function of $z$ </p>\n\n<p><a href=\"https://i.stack.imgur.com/MPnhY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/MPnhY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>4) Physical interpretation of the double sum.</p>\n\n<p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \\pi (r,s)$.</p>\n\n<p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p>\n\n<p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p>\n\n<p>5) Asymptotic behaviour </p>\n\n<p>5a) For small $z$ we have $z \\simeq 1/z^3$.</p>\n\n<p>Indeed, for small $z$ we can write</p>\n\n<p>$$f(0,0,z\\to 0)\\simeq \\frac{4}{(2 \\pi )^{3/2}}\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}+\\frac{1}{z^3}$$</p>\n\n<p>The double sum is a finite constant as can be shown thus</p>\n\n<p>$$fr=\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}&lt;\\int _1^{\\infty }\\int _1^{\\infty }\\frac{1}{\\left(r^2+s^2\\right)^{3/2}}dsdr=2-\\sqrt{2}\\simeq 
0.585786\\text{...}$$</p>\n\n<p>This proves the statement. </p>\n\n<p>5b) For $z\\to\\infty$ we find $f\\simeq \\frac{1}{2 \\pi z}$. </p>\n\n<p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p>\n\n<p>5c) Both limits are easily understood from the physical analogy: </p>\n\n<p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to the $z \\frac{1}{z^3} = \\frac{1}{z^2}$, in agreement with Newton's law of gravity.</p>\n\n<p>For large $z$, on the other hand, the grid structure is \"invisible\" to the point mass, and the force becomes equal to the force of an infinite massive plate, and this is constant. Indeed we have found $f \\simeq 1/z$, times $z$ this is constant.</p>\n\n<p>6) Fourier form of the double sum</p>\n\n<p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p>\n\n<p>$$fc(x,y,z)=\\frac{1}{2 \\pi z}\\sum _{n=-\\infty }^{\\infty } \\sum _{m=-\\infty }^{\\infty } e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)$$</p>\n\n<p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p>\n\n<p>$$fe(x,y,z)=\\int_0^{\\infty } \\frac{e^{-t z^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4\\; t}}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4\\; t}}\\right)}{2 \\pi ^{3/2} \\sqrt{t}} \\, dt$$</p>\n\n<p>Subsituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p>\n\n<p>$$\\vartheta _3\\left(\\frac{u}{2},q\\right)=\\sum _{n=-\\infty }^{\\infty } q^{n^2} \\cos (n u)$$</p>\n\n<p>and doing the $t$-integral gives</p>\n\n<p>$$\\cos (m y) \\cos (n x) \\int_0^{\\infty } \\frac{\\exp \\left(-\\frac{m^2+n^2}{4 t}-t z^2\\right)}{\\left(2 \\pi ^{3/2}\\right) \\sqrt{t}} \\, dt=\\frac{e^{-z 
\\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)}{2 \\pi z}$$</p>\n\n<p>The double sum of which is $fc$. Done.</p>\n\n<p>Remark: it is interesting, that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p>\n\n<p>$$\\frac{1}{2 \\pi z}\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)dmdn=\\frac{1}{\\left(x^2+y^2+z^2\\right)^{3/2}}$$</p>\n\n<p>This expression is less trivial than the one in 1) but the double periodic structure is washed out as well.</p>\n\n<p>7) Approximations</p>\n\n<p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p>\n\n<p>It turns out that for $z\\gtrsim 1$ the Fourier double sum $fc$ requires less terms than the plain sum $f$, while for $z\\lesssim 1$ the situation is reversed.\nThe following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p>\n\n<p>The case $z = \\pi /2$, this is the values of the OP</p>\n\n<p><a href=\"https://i.stack.imgur.com/eIHcS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/eIHcS.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/duUs9.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/duUs9.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>The case $z = 1/2$</p>\n\n<p><a href=\"https://i.stack.imgur.com/XEw1m.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/XEw1m.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/bnjkG.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bnjkG.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We conclude that if $z$ takes on the value $\\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. 
Hence we can state (somewhat boldly) that this is the requested closed form expression.</p>\n" }, { "answer_id": 2082122, "score": 2, "text": "<p>Not really a full answer, but too long for a comment. You can always use the identity\n$$\n\\frac{1}{Z^{3/2}}=\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi Z}\\ ,\n$$\nfor positive $Z$. Your double sum then reads\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\sum_{r=-\\infty}^\\infty e^{-\\xi (x+2\\pi r)^2}\\sum_{s=-\\infty}^\\infty e^{-\\xi (y+2\\pi s)^2}\n$$\nand evaluating the two sums in terms of Elliptic theta functions, one gets\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\frac{\\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\\frac{\\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\n$$\n$$\n=\\int_0^{\\infty } \\frac{e^{-k^2 \\xi \\tau ^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi }}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi }}\\right)}{2 \\pi^{3/2} \\sqrt{\\xi }} \\, d\\xi.\n$$\nThe integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p>\n" }, { "answer_id": 2085653, "score": 1, "text": "<p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. \nThe point of this answer is just to propose something which could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p>\n\n<p>Intuitively, $|\\cos(nx) \\cos(y n)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum become</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}\\sum_{n=1}^\\infty \\sum_{m=1}^\\infty e^{-z\\sqrt{m^2+n^2}}$.</p>\n\n<p>Again this expression is not tractable with Mathematica, so I thought that approximation can be made using the Euler–Maclaurin formula.</p>\n\n<p>I start by doing a simple modification on $f_{s}$ :</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}[\\sum_{n=0}^\\infty \\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-2]$.</p>\n\n<p>In one variable, Euler_Maclaurin formula state that $\\sum_{n=0}^\\infty f(n)=\\frac{1}{2}f(0)+\\int_{0}^{\\infty}f(n) dn$, I will use this twice in what follow.</p>\n\n<p>We know that\n$\\frac{\\pi}{2z^2} = \\int_{0}^{\\infty}\\int_{0}^{\\infty}e^{-z\\sqrt{m^2+n^2}} dn dm$</p>\n\n<p>then</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\int_{0}^{\\infty}[\\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}e^{-zm}]dm$</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}\\sum_{n=0}^\\infty e^{-zn}+\\frac{1}{4}$</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}+\\frac{1}{4}$</p>\n\n<p>So, we can now conclude that </p>\n\n<p>$\\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}= \\frac{\\pi}{2z^2}+\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}-\\frac{1}{4}.$</p>\n\n<p>This all what I was able to do, but I'am not sure that my assumption ($|\\cos(nx) \\cos(y n)|$=c) is reasonable to guarantee a good approximation. What do you think ?</p>\n" }, { "answer_id": 2082129, "score": 6, "text": "<p>This is not a solution, as I haven't found a closed form expression (and I doubt meanwhile that it exists) but a collection of illustrative remarks that might be useful. 
</p>\n\n<p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p>\n\n<p>1) While the sum</p>\n\n<p>$$f=\\sum _{r=-\\infty }^{\\infty } \\sum _{s=-\\infty }^{\\infty } \\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}$$</p>\n\n<p>is returned unevaluated by Mathematica the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p>\n\n<p>$$fi=\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }\\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}dsdr = \\frac{1}{2 \\pi \\sqrt{k^2 \\tau ^2}}$$</p>\n\n<p>This is of course a purely technical remark as the integral lacks the essential dependence on $x$ and $y$.</p>\n\n<p>2) Numerical investigation shows that but a few summand are sufficient to obtain a reasonable approximation to the complete sum.</p>\n\n<p>3) Plots of the double sum </p>\n\n<p>3a) 3D-plot for two values of $z = k \\tau$</p>\n\n<p>$z = 1$</p>\n\n<p><a href=\"https://i.stack.imgur.com/C1gdb.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/C1gdb.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>$z = \\pi /2$ (this is the value to be assigned to z acording to the OP)</p>\n\n<p><a href=\"https://i.stack.imgur.com/xZbt3.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/xZbt3.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We see the expected regular spike structure. 
Where the spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p>\n\n<p>3b) Cross sections $y = 0$ for different values of z</p>\n\n<p><a href=\"https://i.stack.imgur.com/fQfIr.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/fQfIr.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>3c) Maximum value of $f$ as a function of $z$</p>\n\n<p><a href=\"https://i.stack.imgur.com/aKiTz.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aKiTz.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Notice that $f \\simeq 1/z^3$ for small $z$. Which can also be derived directly form the double sum as is shown in section 5).</p>\n\n<p>3d) Minimum value of $f$ as a function of $z$ </p>\n\n<p><a href=\"https://i.stack.imgur.com/MPnhY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/MPnhY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>4) Physical interpretation of the double sum.</p>\n\n<p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \\pi (r,s)$.</p>\n\n<p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p>\n\n<p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p>\n\n<p>5) Asymptotic behaviour </p>\n\n<p>5a) For small $z$ we have $z \\simeq 1/z^3$.</p>\n\n<p>Indeed, for small $z$ we can write</p>\n\n<p>$$f(0,0,z\\to 0)\\simeq \\frac{4}{(2 \\pi )^{3/2}}\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}+\\frac{1}{z^3}$$</p>\n\n<p>The double sum is a finite constant as can be shown thus</p>\n\n<p>$$fr=\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}&lt;\\int _1^{\\infty }\\int _1^{\\infty }\\frac{1}{\\left(r^2+s^2\\right)^{3/2}}dsdr=2-\\sqrt{2}\\simeq 
0.585786\\text{...}$$</p>\n\n<p>This proves the statement. </p>\n\n<p>5b) For $z\\to\\infty$ we find $f\\simeq \\frac{1}{2 \\pi z}$. </p>\n\n<p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p>\n\n<p>5c) Both limits are easily understood from the physical analogy: </p>\n\n<p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to the $z \\frac{1}{z^3} = \\frac{1}{z^2}$, in agreement with Newton's law of gravity.</p>\n\n<p>For large $z$, on the other hand, the grid structure is \"invisible\" to the point mass, and the force becomes equal to the force of an infinite massive plate, and this is constant. Indeed we have found $f \\simeq 1/z$, times $z$ this is constant.</p>\n\n<p>6) Fourier form of the double sum</p>\n\n<p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p>\n\n<p>$$fc(x,y,z)=\\frac{1}{2 \\pi z}\\sum _{n=-\\infty }^{\\infty } \\sum _{m=-\\infty }^{\\infty } e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)$$</p>\n\n<p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p>\n\n<p>$$fe(x,y,z)=\\int_0^{\\infty } \\frac{e^{-t z^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4\\; t}}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4\\; t}}\\right)}{2 \\pi ^{3/2} \\sqrt{t}} \\, dt$$</p>\n\n<p>Subsituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p>\n\n<p>$$\\vartheta _3\\left(\\frac{u}{2},q\\right)=\\sum _{n=-\\infty }^{\\infty } q^{n^2} \\cos (n u)$$</p>\n\n<p>and doing the $t$-integral gives</p>\n\n<p>$$\\cos (m y) \\cos (n x) \\int_0^{\\infty } \\frac{\\exp \\left(-\\frac{m^2+n^2}{4 t}-t z^2\\right)}{\\left(2 \\pi ^{3/2}\\right) \\sqrt{t}} \\, dt=\\frac{e^{-z 
\\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)}{2 \\pi z}$$</p>\n\n<p>The double sum of which is $fc$. Done.</p>\n\n<p>Remark: it is interesting, that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p>\n\n<p>$$\\frac{1}{2 \\pi z}\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)dmdn=\\frac{1}{\\left(x^2+y^2+z^2\\right)^{3/2}}$$</p>\n\n<p>This expression is less trivial than the one in 1) but the double periodic structure is washed out as well.</p>\n\n<p>7) Approximations</p>\n\n<p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p>\n\n<p>It turns out that for $z\\gtrsim 1$ the Fourier double sum $fc$ requires less terms than the plain sum $f$, while for $z\\lesssim 1$ the situation is reversed.\nThe following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p>\n\n<p>The case $z = \\pi /2$, this is the values of the OP</p>\n\n<p><a href=\"https://i.stack.imgur.com/eIHcS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/eIHcS.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/duUs9.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/duUs9.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>The case $z = 1/2$</p>\n\n<p><a href=\"https://i.stack.imgur.com/XEw1m.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/XEw1m.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/bnjkG.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bnjkG.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We conclude that if $z$ takes on the value $\\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. 
Hence we can state (somewhat boldly) that this is the requested closed form expression.</p>\n" }, { "answer_id": 2082122, "score": 2, "text": "<p>Not really a full answer, but too long for a comment. You can always use the identity\n$$\n\\frac{1}{Z^{3/2}}=\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi Z}\\ ,\n$$\nfor positive $Z$. Your double sum then reads\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\sum_{r=-\\infty}^\\infty e^{-\\xi (x+2\\pi r)^2}\\sum_{s=-\\infty}^\\infty e^{-\\xi (y+2\\pi s)^2}\n$$\nand evaluating the two sums in terms of Elliptic theta functions, one gets\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\frac{\\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\\frac{\\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\n$$\n$$\n=\\int_0^{\\infty } \\frac{e^{-k^2 \\xi \\tau ^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi }}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi }}\\right)}{2 \\pi^{3/2} \\sqrt{\\xi }} \\, d\\xi.\n$$\nThe integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p>\n" }, { "answer_id": 2085653, "score": 1, "text": "<p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. \nThe point of this answer is just to propose something which could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p>\n\n<p>Intuitively, $|\\cos(nx) \\cos(y n)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum become</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}\\sum_{n=1}^\\infty \\sum_{m=1}^\\infty e^{-z\\sqrt{m^2+n^2}}$.</p>\n\n<p>Again this expression is not tractable with Mathematica, so I thought that approximation can be made using the Euler–Maclaurin formula.</p>\n\n<p>I start by doing a simple modification on $f_{s}$ :</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}[\\sum_{n=0}^\\infty \\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-2]$.</p>\n\n<p>In one variable, Euler_Maclaurin formula state that $\\sum_{n=0}^\\infty f(n)=\\frac{1}{2}f(0)+\\int_{0}^{\\infty}f(n) dn$, I will use this twice in what follow.</p>\n\n<p>We know that\n$\\frac{\\pi}{2z^2} = \\int_{0}^{\\infty}\\int_{0}^{\\infty}e^{-z\\sqrt{m^2+n^2}} dn dm$</p>\n\n<p>then</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\int_{0}^{\\infty}[\\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}e^{-zm}]dm$</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}\\sum_{n=0}^\\infty e^{-zn}+\\frac{1}{4}$</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}+\\frac{1}{4}$</p>\n\n<p>So, we can now conclude that </p>\n\n<p>$\\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}= \\frac{\\pi}{2z^2}+\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}-\\frac{1}{4}.$</p>\n\n<p>This all what I was able to do, but I'am not sure that my assumption ($|\\cos(nx) \\cos(y n)|$=c) is reasonable to guarantee a good approximation. What do you think ?</p>\n" }, { "answer_id": 2082129, "score": 6, "text": "<p>This is not a solution, as I haven't found a closed form expression (and I doubt meanwhile that it exists) but a collection of illustrative remarks that might be useful. 
</p>\n\n<p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p>\n\n<p>1) While the sum</p>\n\n<p>$$f=\\sum _{r=-\\infty }^{\\infty } \\sum _{s=-\\infty }^{\\infty } \\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}$$</p>\n\n<p>is returned unevaluated by Mathematica the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p>\n\n<p>$$fi=\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }\\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}dsdr = \\frac{1}{2 \\pi \\sqrt{k^2 \\tau ^2}}$$</p>\n\n<p>This is of course a purely technical remark as the integral lacks the essential dependence on $x$ and $y$.</p>\n\n<p>2) Numerical investigation shows that but a few summand are sufficient to obtain a reasonable approximation to the complete sum.</p>\n\n<p>3) Plots of the double sum </p>\n\n<p>3a) 3D-plot for two values of $z = k \\tau$</p>\n\n<p>$z = 1$</p>\n\n<p><a href=\"https://i.stack.imgur.com/C1gdb.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/C1gdb.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>$z = \\pi /2$ (this is the value to be assigned to z acording to the OP)</p>\n\n<p><a href=\"https://i.stack.imgur.com/xZbt3.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/xZbt3.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We see the expected regular spike structure. 
Where the spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p>\n\n<p>3b) Cross sections $y = 0$ for different values of z</p>\n\n<p><a href=\"https://i.stack.imgur.com/fQfIr.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/fQfIr.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>3c) Maximum value of $f$ as a function of $z$</p>\n\n<p><a href=\"https://i.stack.imgur.com/aKiTz.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aKiTz.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Notice that $f \\simeq 1/z^3$ for small $z$. Which can also be derived directly form the double sum as is shown in section 5).</p>\n\n<p>3d) Minimum value of $f$ as a function of $z$ </p>\n\n<p><a href=\"https://i.stack.imgur.com/MPnhY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/MPnhY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>4) Physical interpretation of the double sum.</p>\n\n<p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \\pi (r,s)$.</p>\n\n<p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p>\n\n<p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p>\n\n<p>5) Asymptotic behaviour </p>\n\n<p>5a) For small $z$ we have $z \\simeq 1/z^3$.</p>\n\n<p>Indeed, for small $z$ we can write</p>\n\n<p>$$f(0,0,z\\to 0)\\simeq \\frac{4}{(2 \\pi )^{3/2}}\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}+\\frac{1}{z^3}$$</p>\n\n<p>The double sum is a finite constant as can be shown thus</p>\n\n<p>$$fr=\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}&lt;\\int _1^{\\infty }\\int _1^{\\infty }\\frac{1}{\\left(r^2+s^2\\right)^{3/2}}dsdr=2-\\sqrt{2}\\simeq 
0.585786\\text{...}$$</p>\n\n<p>This proves the statement. </p>\n\n<p>5b) For $z\\to\\infty$ we find $f\\simeq \\frac{1}{2 \\pi z}$. </p>\n\n<p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p>\n\n<p>5c) Both limits are easily understood from the physical analogy: </p>\n\n<p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to the $z \\frac{1}{z^3} = \\frac{1}{z^2}$, in agreement with Newton's law of gravity.</p>\n\n<p>For large $z$, on the other hand, the grid structure is \"invisible\" to the point mass, and the force becomes equal to the force of an infinite massive plate, and this is constant. Indeed we have found $f \\simeq 1/z$, times $z$ this is constant.</p>\n\n<p>6) Fourier form of the double sum</p>\n\n<p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p>\n\n<p>$$fc(x,y,z)=\\frac{1}{2 \\pi z}\\sum _{n=-\\infty }^{\\infty } \\sum _{m=-\\infty }^{\\infty } e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)$$</p>\n\n<p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p>\n\n<p>$$fe(x,y,z)=\\int_0^{\\infty } \\frac{e^{-t z^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4\\; t}}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4\\; t}}\\right)}{2 \\pi ^{3/2} \\sqrt{t}} \\, dt$$</p>\n\n<p>Subsituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p>\n\n<p>$$\\vartheta _3\\left(\\frac{u}{2},q\\right)=\\sum _{n=-\\infty }^{\\infty } q^{n^2} \\cos (n u)$$</p>\n\n<p>and doing the $t$-integral gives</p>\n\n<p>$$\\cos (m y) \\cos (n x) \\int_0^{\\infty } \\frac{\\exp \\left(-\\frac{m^2+n^2}{4 t}-t z^2\\right)}{\\left(2 \\pi ^{3/2}\\right) \\sqrt{t}} \\, dt=\\frac{e^{-z 
\\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)}{2 \\pi z}$$</p>\n\n<p>The double sum of this expression is $fc$. Done.</p>\n\n<p>Remark: it is interesting that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p>\n\n<p>$$\\frac{1}{2 \\pi z}\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)dmdn=\\frac{1}{\\left(x^2+y^2+z^2\\right)^{3/2}}$$</p>\n\n<p>This expression is less trivial than the one in 1) but the double periodic structure is washed out as well.</p>\n\n<p>7) Approximations</p>\n\n<p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p>\n\n<p>It turns out that for $z\\gtrsim 1$ the Fourier double sum $fc$ requires fewer terms than the plain sum $f$, while for $z\\lesssim 1$ the situation is reversed.\nThe following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p>\n\n<p>The case $z = \\pi /2$ (this is the value of the OP)</p>\n\n<p><a href=\"https://i.stack.imgur.com/eIHcS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/eIHcS.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/duUs9.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/duUs9.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>The case $z = 1/2$</p>\n\n<p><a href=\"https://i.stack.imgur.com/XEw1m.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/XEw1m.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/bnjkG.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bnjkG.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We conclude that if $z$ takes on the value $\\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. 
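This agreement is easy to check numerically. The following short Python sketch (an illustration added here, not part of the original answer; the function names are ad hoc) compares the plain lattice sum, truncated at $|r|,|s| \le 40$, with the Fourier double sum truncated at $n = 4$, for the OP's value $z = \pi/2$:

```python
import math

z = math.pi / 2  # the OP's value z = k*tau

def f_direct(x, y, z, n=40):
    """Plain lattice sum f, with indices r, s truncated to [-n, n]."""
    return sum(
        (z**2 + (2 * math.pi * r + x) ** 2 + (2 * math.pi * s + y) ** 2) ** -1.5
        for r in range(-n, n + 1)
        for s in range(-n, n + 1)
    )

def f_fourier(x, y, z, n=4):
    """Fourier double sum fc, with indices truncated to [-n, n].

    The inner index is called k here (rather than n as in the text)
    to avoid a clash with the truncation parameter."""
    total = sum(
        math.exp(-z * math.hypot(m, k)) * math.cos(m * y) * math.cos(k * x)
        for m in range(-n, n + 1)
        for k in range(-n, n + 1)
    )
    return total / (2 * math.pi * z)

for x, y in [(0.0, 0.0), (1.0, 2.0), (math.pi, math.pi)]:
    print(f"({x:.2f}, {y:.2f}): direct = {f_direct(x, y, z):.4f}, "
          f"fourier = {f_fourier(x, y, z):.4f}")
```

For these sample points the two truncated sums agree closely (well within $10^{-2}$), which supports the conclusion above.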
Hence we can state (somewhat boldly) that this is the requested closed form expression.</p>\n" }, { "answer_id": 2082122, "score": 2, "text": "<p>Not really a full answer, but too long for a comment. You can always use the identity\n$$\n\\frac{1}{Z^{3/2}}=\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi Z}\\ ,\n$$\nfor positive $Z$. Your double sum then reads\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\sum_{r=-\\infty}^\\infty e^{-\\xi (x+2\\pi r)^2}\\sum_{s=-\\infty}^\\infty e^{-\\xi (y+2\\pi s)^2}\n$$\nand evaluating the two sums in terms of Elliptic theta functions, one gets\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\frac{\\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\\frac{\\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\n$$\n$$\n=\\int_0^{\\infty } \\frac{e^{-k^2 \\xi \\tau ^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi }}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi }}\\right)}{2 \\pi^{3/2} \\sqrt{\\xi }} \\, d\\xi.\n$$\nThe integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p>\n" }, { "answer_id": 2085653, "score": 1, "text": "<p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. \nThe point of this answer is just to propose something which could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p>\n\n<p>Intuitively, $|\\cos(nx) \\cos(y n)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum become</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}\\sum_{n=1}^\\infty \\sum_{m=1}^\\infty e^{-z\\sqrt{m^2+n^2}}$.</p>\n\n<p>Again this expression is not tractable with Mathematica, so I thought that approximation can be made using the Euler–Maclaurin formula.</p>\n\n<p>I start by doing a simple modification on $f_{s}$ :</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}[\\sum_{n=0}^\\infty \\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-2]$.</p>\n\n<p>In one variable, Euler_Maclaurin formula state that $\\sum_{n=0}^\\infty f(n)=\\frac{1}{2}f(0)+\\int_{0}^{\\infty}f(n) dn$, I will use this twice in what follow.</p>\n\n<p>We know that\n$\\frac{\\pi}{2z^2} = \\int_{0}^{\\infty}\\int_{0}^{\\infty}e^{-z\\sqrt{m^2+n^2}} dn dm$</p>\n\n<p>then</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\int_{0}^{\\infty}[\\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}e^{-zm}]dm$</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}\\sum_{n=0}^\\infty e^{-zn}+\\frac{1}{4}$</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}+\\frac{1}{4}$</p>\n\n<p>So, we can now conclude that </p>\n\n<p>$\\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}= \\frac{\\pi}{2z^2}+\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}-\\frac{1}{4}.$</p>\n\n<p>This all what I was able to do, but I'am not sure that my assumption ($|\\cos(nx) \\cos(y n)|$=c) is reasonable to guarantee a good approximation. What do you think ?</p>\n" }, { "answer_id": 2082129, "score": 6, "text": "<p>This is not a solution, as I haven't found a closed form expression (and I doubt meanwhile that it exists) but a collection of illustrative remarks that might be useful. 
</p>\n\n<p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p>\n\n<p>1) While the sum</p>\n\n<p>$$f=\\sum _{r=-\\infty }^{\\infty } \\sum _{s=-\\infty }^{\\infty } \\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}$$</p>\n\n<p>is returned unevaluated by Mathematica the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p>\n\n<p>$$fi=\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }\\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}dsdr = \\frac{1}{2 \\pi \\sqrt{k^2 \\tau ^2}}$$</p>\n\n<p>This is of course a purely technical remark as the integral lacks the essential dependence on $x$ and $y$.</p>\n\n<p>2) Numerical investigation shows that but a few summand are sufficient to obtain a reasonable approximation to the complete sum.</p>\n\n<p>3) Plots of the double sum </p>\n\n<p>3a) 3D-plot for two values of $z = k \\tau$</p>\n\n<p>$z = 1$</p>\n\n<p><a href=\"https://i.stack.imgur.com/C1gdb.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/C1gdb.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>$z = \\pi /2$ (this is the value to be assigned to z acording to the OP)</p>\n\n<p><a href=\"https://i.stack.imgur.com/xZbt3.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/xZbt3.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We see the expected regular spike structure. 
Where the spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p>\n\n<p>3b) Cross sections $y = 0$ for different values of z</p>\n\n<p><a href=\"https://i.stack.imgur.com/fQfIr.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/fQfIr.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>3c) Maximum value of $f$ as a function of $z$</p>\n\n<p><a href=\"https://i.stack.imgur.com/aKiTz.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aKiTz.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Notice that $f \\simeq 1/z^3$ for small $z$. Which can also be derived directly form the double sum as is shown in section 5).</p>\n\n<p>3d) Minimum value of $f$ as a function of $z$ </p>\n\n<p><a href=\"https://i.stack.imgur.com/MPnhY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/MPnhY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>4) Physical interpretation of the double sum.</p>\n\n<p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \\pi (r,s)$.</p>\n\n<p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p>\n\n<p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p>\n\n<p>5) Asymptotic behaviour </p>\n\n<p>5a) For small $z$ we have $z \\simeq 1/z^3$.</p>\n\n<p>Indeed, for small $z$ we can write</p>\n\n<p>$$f(0,0,z\\to 0)\\simeq \\frac{4}{(2 \\pi )^{3/2}}\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}+\\frac{1}{z^3}$$</p>\n\n<p>The double sum is a finite constant as can be shown thus</p>\n\n<p>$$fr=\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}&lt;\\int _1^{\\infty }\\int _1^{\\infty }\\frac{1}{\\left(r^2+s^2\\right)^{3/2}}dsdr=2-\\sqrt{2}\\simeq 
0.585786\\text{...}$$</p>\n\n<p>This proves the statement. </p>\n\n<p>5b) For $z\\to\\infty$ we find $f\\simeq \\frac{1}{2 \\pi z}$. </p>\n\n<p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p>\n\n<p>5c) Both limits are easily understood from the physical analogy: </p>\n\n<p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to the $z \\frac{1}{z^3} = \\frac{1}{z^2}$, in agreement with Newton's law of gravity.</p>\n\n<p>For large $z$, on the other hand, the grid structure is \"invisible\" to the point mass, and the force becomes equal to the force of an infinite massive plate, and this is constant. Indeed we have found $f \\simeq 1/z$, times $z$ this is constant.</p>\n\n<p>6) Fourier form of the double sum</p>\n\n<p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p>\n\n<p>$$fc(x,y,z)=\\frac{1}{2 \\pi z}\\sum _{n=-\\infty }^{\\infty } \\sum _{m=-\\infty }^{\\infty } e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)$$</p>\n\n<p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p>\n\n<p>$$fe(x,y,z)=\\int_0^{\\infty } \\frac{e^{-t z^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4\\; t}}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4\\; t}}\\right)}{2 \\pi ^{3/2} \\sqrt{t}} \\, dt$$</p>\n\n<p>Subsituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p>\n\n<p>$$\\vartheta _3\\left(\\frac{u}{2},q\\right)=\\sum _{n=-\\infty }^{\\infty } q^{n^2} \\cos (n u)$$</p>\n\n<p>and doing the $t$-integral gives</p>\n\n<p>$$\\cos (m y) \\cos (n x) \\int_0^{\\infty } \\frac{\\exp \\left(-\\frac{m^2+n^2}{4 t}-t z^2\\right)}{\\left(2 \\pi ^{3/2}\\right) \\sqrt{t}} \\, dt=\\frac{e^{-z 
\\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)}{2 \\pi z}$$</p>\n\n<p>The double sum of which is $fc$. Done.</p>\n\n<p>Remark: it is interesting, that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p>\n\n<p>$$\\frac{1}{2 \\pi z}\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)dmdn=\\frac{1}{\\left(x^2+y^2+z^2\\right)^{3/2}}$$</p>\n\n<p>This expression is less trivial than the one in 1) but the double periodic structure is washed out as well.</p>\n\n<p>7) Approximations</p>\n\n<p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p>\n\n<p>It turns out that for $z\\gtrsim 1$ the Fourier double sum $fc$ requires less terms than the plain sum $f$, while for $z\\lesssim 1$ the situation is reversed.\nThe following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p>\n\n<p>The case $z = \\pi /2$, this is the values of the OP</p>\n\n<p><a href=\"https://i.stack.imgur.com/eIHcS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/eIHcS.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/duUs9.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/duUs9.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>The case $z = 1/2$</p>\n\n<p><a href=\"https://i.stack.imgur.com/XEw1m.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/XEw1m.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/bnjkG.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bnjkG.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We conclude that if $z$ takes on the value $\\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. 
Hence we can state (somewhat boldly) that this is the requested closed form expression.</p>\n" }, { "answer_id": 2082122, "score": 2, "text": "<p>Not really a full answer, but too long for a comment. You can always use the identity\n$$\n\\frac{1}{Z^{3/2}}=\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi Z}\\ ,\n$$\nfor positive $Z$. Your double sum then reads\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\sum_{r=-\\infty}^\\infty e^{-\\xi (x+2\\pi r)^2}\\sum_{s=-\\infty}^\\infty e^{-\\xi (y+2\\pi s)^2}\n$$\nand evaluating the two sums in terms of Elliptic theta functions, one gets\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\frac{\\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\\frac{\\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\n$$\n$$\n=\\int_0^{\\infty } \\frac{e^{-k^2 \\xi \\tau ^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi }}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi }}\\right)}{2 \\pi^{3/2} \\sqrt{\\xi }} \\, d\\xi.\n$$\nThe integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p>\n" }, { "answer_id": 2085653, "score": 1, "text": "<p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. \nThe point of this answer is just to propose something which could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p>\n\n<p>Intuitively, $|\\cos(nx) \\cos(y n)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum become</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}\\sum_{n=1}^\\infty \\sum_{m=1}^\\infty e^{-z\\sqrt{m^2+n^2}}$.</p>\n\n<p>Again this expression is not tractable with Mathematica, so I thought that approximation can be made using the Euler–Maclaurin formula.</p>\n\n<p>I start by doing a simple modification on $f_{s}$ :</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}[\\sum_{n=0}^\\infty \\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-2]$.</p>\n\n<p>In one variable, Euler_Maclaurin formula state that $\\sum_{n=0}^\\infty f(n)=\\frac{1}{2}f(0)+\\int_{0}^{\\infty}f(n) dn$, I will use this twice in what follow.</p>\n\n<p>We know that\n$\\frac{\\pi}{2z^2} = \\int_{0}^{\\infty}\\int_{0}^{\\infty}e^{-z\\sqrt{m^2+n^2}} dn dm$</p>\n\n<p>then</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\int_{0}^{\\infty}[\\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}e^{-zm}]dm$</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}\\sum_{n=0}^\\infty e^{-zn}+\\frac{1}{4}$</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}+\\frac{1}{4}$</p>\n\n<p>So, we can now conclude that </p>\n\n<p>$\\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}= \\frac{\\pi}{2z^2}+\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}-\\frac{1}{4}.$</p>\n\n<p>This all what I was able to do, but I'am not sure that my assumption ($|\\cos(nx) \\cos(y n)|$=c) is reasonable to guarantee a good approximation. What do you think ?</p>\n" }, { "answer_id": 2082129, "score": 6, "text": "<p>This is not a solution, as I haven't found a closed form expression (and I doubt meanwhile that it exists) but a collection of illustrative remarks that might be useful. 
</p>\n\n<p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p>\n\n<p>1) While the sum</p>\n\n<p>$$f=\\sum _{r=-\\infty }^{\\infty } \\sum _{s=-\\infty }^{\\infty } \\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}$$</p>\n\n<p>is returned unevaluated by Mathematica the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p>\n\n<p>$$fi=\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }\\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}dsdr = \\frac{1}{2 \\pi \\sqrt{k^2 \\tau ^2}}$$</p>\n\n<p>This is of course a purely technical remark as the integral lacks the essential dependence on $x$ and $y$.</p>\n\n<p>2) Numerical investigation shows that but a few summand are sufficient to obtain a reasonable approximation to the complete sum.</p>\n\n<p>3) Plots of the double sum </p>\n\n<p>3a) 3D-plot for two values of $z = k \\tau$</p>\n\n<p>$z = 1$</p>\n\n<p><a href=\"https://i.stack.imgur.com/C1gdb.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/C1gdb.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>$z = \\pi /2$ (this is the value to be assigned to z acording to the OP)</p>\n\n<p><a href=\"https://i.stack.imgur.com/xZbt3.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/xZbt3.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We see the expected regular spike structure. 
Where the spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p>\n\n<p>3b) Cross sections $y = 0$ for different values of z</p>\n\n<p><a href=\"https://i.stack.imgur.com/fQfIr.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/fQfIr.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>3c) Maximum value of $f$ as a function of $z$</p>\n\n<p><a href=\"https://i.stack.imgur.com/aKiTz.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aKiTz.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Notice that $f \\simeq 1/z^3$ for small $z$. Which can also be derived directly form the double sum as is shown in section 5).</p>\n\n<p>3d) Minimum value of $f$ as a function of $z$ </p>\n\n<p><a href=\"https://i.stack.imgur.com/MPnhY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/MPnhY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>4) Physical interpretation of the double sum.</p>\n\n<p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \\pi (r,s)$.</p>\n\n<p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p>\n\n<p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p>\n\n<p>5) Asymptotic behaviour </p>\n\n<p>5a) For small $z$ we have $z \\simeq 1/z^3$.</p>\n\n<p>Indeed, for small $z$ we can write</p>\n\n<p>$$f(0,0,z\\to 0)\\simeq \\frac{4}{(2 \\pi )^{3/2}}\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}+\\frac{1}{z^3}$$</p>\n\n<p>The double sum is a finite constant as can be shown thus</p>\n\n<p>$$fr=\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}&lt;\\int _1^{\\infty }\\int _1^{\\infty }\\frac{1}{\\left(r^2+s^2\\right)^{3/2}}dsdr=2-\\sqrt{2}\\simeq 
0.585786\\text{...}$$</p>\n\n<p>This proves the statement. </p>\n\n<p>5b) For $z\\to\\infty$ we find $f\\simeq \\frac{1}{2 \\pi z}$. </p>\n\n<p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p>\n\n<p>5c) Both limits are easily understood from the physical analogy: </p>\n\n<p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to the $z \\frac{1}{z^3} = \\frac{1}{z^2}$, in agreement with Newton's law of gravity.</p>\n\n<p>For large $z$, on the other hand, the grid structure is \"invisible\" to the point mass, and the force becomes equal to the force of an infinite massive plate, and this is constant. Indeed we have found $f \\simeq 1/z$, times $z$ this is constant.</p>\n\n<p>6) Fourier form of the double sum</p>\n\n<p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p>\n\n<p>$$fc(x,y,z)=\\frac{1}{2 \\pi z}\\sum _{n=-\\infty }^{\\infty } \\sum _{m=-\\infty }^{\\infty } e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)$$</p>\n\n<p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p>\n\n<p>$$fe(x,y,z)=\\int_0^{\\infty } \\frac{e^{-t z^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4\\; t}}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4\\; t}}\\right)}{2 \\pi ^{3/2} \\sqrt{t}} \\, dt$$</p>\n\n<p>Subsituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p>\n\n<p>$$\\vartheta _3\\left(\\frac{u}{2},q\\right)=\\sum _{n=-\\infty }^{\\infty } q^{n^2} \\cos (n u)$$</p>\n\n<p>and doing the $t$-integral gives</p>\n\n<p>$$\\cos (m y) \\cos (n x) \\int_0^{\\infty } \\frac{\\exp \\left(-\\frac{m^2+n^2}{4 t}-t z^2\\right)}{\\left(2 \\pi ^{3/2}\\right) \\sqrt{t}} \\, dt=\\frac{e^{-z 
\\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)}{2 \\pi z}$$</p>\n\n<p>The double sum of which is $fc$. Done.</p>\n\n<p>Remark: it is interesting, that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p>\n\n<p>$$\\frac{1}{2 \\pi z}\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)dmdn=\\frac{1}{\\left(x^2+y^2+z^2\\right)^{3/2}}$$</p>\n\n<p>This expression is less trivial than the one in 1) but the double periodic structure is washed out as well.</p>\n\n<p>7) Approximations</p>\n\n<p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p>\n\n<p>It turns out that for $z\\gtrsim 1$ the Fourier double sum $fc$ requires less terms than the plain sum $f$, while for $z\\lesssim 1$ the situation is reversed.\nThe following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p>\n\n<p>The case $z = \\pi /2$, this is the values of the OP</p>\n\n<p><a href=\"https://i.stack.imgur.com/eIHcS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/eIHcS.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/duUs9.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/duUs9.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>The case $z = 1/2$</p>\n\n<p><a href=\"https://i.stack.imgur.com/XEw1m.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/XEw1m.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/bnjkG.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bnjkG.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We conclude that if $z$ takes on the value $\\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. 
Hence we can state (somewhat boldly) that this is the requested closed form expression.</p>\n" }, { "answer_id": 2082122, "score": 2, "text": "<p>Not really a full answer, but too long for a comment. You can always use the identity\n$$\n\\frac{1}{Z^{3/2}}=\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi Z}\\ ,\n$$\nfor positive $Z$. Your double sum then reads\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\sum_{r=-\\infty}^\\infty e^{-\\xi (x+2\\pi r)^2}\\sum_{s=-\\infty}^\\infty e^{-\\xi (y+2\\pi s)^2}\n$$\nand evaluating the two sums in terms of Elliptic theta functions, one gets\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\frac{\\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\\frac{\\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\n$$\n$$\n=\\int_0^{\\infty } \\frac{e^{-k^2 \\xi \\tau ^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi }}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi }}\\right)}{2 \\pi^{3/2} \\sqrt{\\xi }} \\, d\\xi.\n$$\nThe integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p>\n" }, { "answer_id": 2085653, "score": 1, "text": "<p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. \nThe point of this answer is just to propose something which could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p>\n\n<p>Intuitively, $|\\cos(nx) \\cos(y n)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum become</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}\\sum_{n=1}^\\infty \\sum_{m=1}^\\infty e^{-z\\sqrt{m^2+n^2}}$.</p>\n\n<p>Again this expression is not tractable with Mathematica, so I thought that approximation can be made using the Euler–Maclaurin formula.</p>\n\n<p>I start by doing a simple modification on $f_{s}$ :</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}[\\sum_{n=0}^\\infty \\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-2]$.</p>\n\n<p>In one variable, Euler_Maclaurin formula state that $\\sum_{n=0}^\\infty f(n)=\\frac{1}{2}f(0)+\\int_{0}^{\\infty}f(n) dn$, I will use this twice in what follow.</p>\n\n<p>We know that\n$\\frac{\\pi}{2z^2} = \\int_{0}^{\\infty}\\int_{0}^{\\infty}e^{-z\\sqrt{m^2+n^2}} dn dm$</p>\n\n<p>then</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\int_{0}^{\\infty}[\\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}e^{-zm}]dm$</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}\\sum_{n=0}^\\infty e^{-zn}+\\frac{1}{4}$</p>\n\n<p>$\\frac{\\pi}{2z^2} = \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}+\\frac{1}{4}$</p>\n\n<p>So, we can now conclude that </p>\n\n<p>$\\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}= \\frac{\\pi}{2z^2}+\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}-\\frac{1}{4}.$</p>\n\n<p>This all what I was able to do, but I'am not sure that my assumption ($|\\cos(nx) \\cos(y n)|$=c) is reasonable to guarantee a good approximation. What do you think ?</p>\n" }, { "answer_id": 2082129, "score": 6, "text": "<p>This is not a solution, as I haven't found a closed form expression (and I doubt meanwhile that it exists) but a collection of illustrative remarks that might be useful. 
</p>\n\n<p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p>\n\n<p>1) While the sum</p>\n\n<p>$$f=\\sum _{r=-\\infty }^{\\infty } \\sum _{s=-\\infty }^{\\infty } \\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}$$</p>\n\n<p>is returned unevaluated by Mathematica the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p>\n\n<p>$$fi=\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }\\frac{1}{\\left(k^2 \\tau ^2+(2 \\pi r+x)^2+(2 \\pi s+y)^2\\right)^{3/2}}dsdr = \\frac{1}{2 \\pi \\sqrt{k^2 \\tau ^2}}$$</p>\n\n<p>This is of course a purely technical remark as the integral lacks the essential dependence on $x$ and $y$.</p>\n\n<p>2) Numerical investigation shows that but a few summand are sufficient to obtain a reasonable approximation to the complete sum.</p>\n\n<p>3) Plots of the double sum </p>\n\n<p>3a) 3D-plot for two values of $z = k \\tau$</p>\n\n<p>$z = 1$</p>\n\n<p><a href=\"https://i.stack.imgur.com/C1gdb.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/C1gdb.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>$z = \\pi /2$ (this is the value to be assigned to z acording to the OP)</p>\n\n<p><a href=\"https://i.stack.imgur.com/xZbt3.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/xZbt3.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We see the expected regular spike structure. 
Where the spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p>\n\n<p>3b) Cross sections $y = 0$ for different values of z</p>\n\n<p><a href=\"https://i.stack.imgur.com/fQfIr.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/fQfIr.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>3c) Maximum value of $f$ as a function of $z$</p>\n\n<p><a href=\"https://i.stack.imgur.com/aKiTz.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/aKiTz.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Notice that $f \\simeq 1/z^3$ for small $z$. Which can also be derived directly form the double sum as is shown in section 5).</p>\n\n<p>3d) Minimum value of $f$ as a function of $z$ </p>\n\n<p><a href=\"https://i.stack.imgur.com/MPnhY.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/MPnhY.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>4) Physical interpretation of the double sum.</p>\n\n<p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \\pi (r,s)$.</p>\n\n<p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p>\n\n<p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p>\n\n<p>5) Asymptotic behaviour </p>\n\n<p>5a) For small $z$ we have $z \\simeq 1/z^3$.</p>\n\n<p>Indeed, for small $z$ we can write</p>\n\n<p>$$f(0,0,z\\to 0)\\simeq \\frac{4}{(2 \\pi )^{3/2}}\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}+\\frac{1}{z^3}$$</p>\n\n<p>The double sum is a finite constant as can be shown thus</p>\n\n<p>$$fr=\\sum _{r=1}^{\\infty } \\sum _{s=1}^{\\infty } \\frac{1}{\\left(r^2+s^2\\right)^{3/2}}&lt;\\int _1^{\\infty }\\int _1^{\\infty }\\frac{1}{\\left(r^2+s^2\\right)^{3/2}}dsdr=2-\\sqrt{2}\\simeq 
0.585786\\text{...}$$</p>\n\n<p>This proves the statement. </p>\n\n<p>5b) For $z\\to\\infty$ we find $f\\simeq \\frac{1}{2 \\pi z}$. </p>\n\n<p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p>\n\n<p>5c) Both limits are easily understood from the physical analogy: </p>\n\n<p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to the $z \\frac{1}{z^3} = \\frac{1}{z^2}$, in agreement with Newton's law of gravity.</p>\n\n<p>For large $z$, on the other hand, the grid structure is \"invisible\" to the point mass, and the force becomes equal to the force of an infinite massive plate, and this is constant. Indeed we have found $f \\simeq 1/z$, times $z$ this is constant.</p>\n\n<p>6) Fourier form of the double sum</p>\n\n<p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p>\n\n<p>$$fc(x,y,z)=\\frac{1}{2 \\pi z}\\sum _{n=-\\infty }^{\\infty } \\sum _{m=-\\infty }^{\\infty } e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)$$</p>\n\n<p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p>\n\n<p>$$fe(x,y,z)=\\int_0^{\\infty } \\frac{e^{-t z^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4\\; t}}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4\\; t}}\\right)}{2 \\pi ^{3/2} \\sqrt{t}} \\, dt$$</p>\n\n<p>Subsituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href=\"http://mathworld.wolfram.com/JacobiThetaFunctions.html\" rel=\"nofollow noreferrer\">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p>\n\n<p>$$\\vartheta _3\\left(\\frac{u}{2},q\\right)=\\sum _{n=-\\infty }^{\\infty } q^{n^2} \\cos (n u)$$</p>\n\n<p>and doing the $t$-integral gives</p>\n\n<p>$$\\cos (m y) \\cos (n x) \\int_0^{\\infty } \\frac{\\exp \\left(-\\frac{m^2+n^2}{4 t}-t z^2\\right)}{\\left(2 \\pi ^{3/2}\\right) \\sqrt{t}} \\, dt=\\frac{e^{-z 
\\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)}{2 \\pi z}$$</p>\n\n<p>The double sum of which is $fc$. Done.</p>\n\n<p>Remark: it is interesting that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p>\n\n<p>$$\\frac{1}{2 \\pi z}\\int _{-\\infty }^{\\infty }\\int _{-\\infty }^{\\infty }e^{-z \\sqrt{m^2+n^2}} \\cos (m y) \\cos (n x)dmdn=\\frac{1}{\\left(x^2+y^2+z^2\\right)^{3/2}}$$</p>\n\n<p>This expression is less trivial than the one in 1) but the double periodic structure is washed out as well.</p>\n\n<p>7) Approximations</p>\n\n<p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p>\n\n<p>It turns out that for $z\\gtrsim 1$ the Fourier double sum $fc$ requires fewer terms than the plain sum $f$, while for $z\\lesssim 1$ the situation is reversed.\nThe following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p>\n\n<p>The case $z = \\pi /2$; this is the value of the OP</p>\n\n<p><a href=\"https://i.stack.imgur.com/eIHcS.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/eIHcS.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/duUs9.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/duUs9.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>The case $z = 1/2$</p>\n\n<p><a href=\"https://i.stack.imgur.com/XEw1m.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/XEw1m.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/bnjkG.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/bnjkG.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>We conclude that if $z$ takes on the value $\\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. 
Hence we can state (somewhat boldly) that this is the requested closed form expression.</p>\n" }, { "answer_id": 2082122, "score": 2, "text": "<p>Not really a full answer, but too long for a comment. You can always use the identity\n$$\n\\frac{1}{Z^{3/2}}=\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi Z}\\ ,\n$$\nfor positive $Z$. Your double sum then reads\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\sum_{r=-\\infty}^\\infty e^{-\\xi (x+2\\pi r)^2}\\sum_{s=-\\infty}^\\infty e^{-\\xi (y+2\\pi s)^2}\n$$\nand evaluating the two sums in terms of Elliptic theta functions, one gets\n$$\n\\frac{2}{\\sqrt{\\pi}}\\int_0^\\infty d\\xi\\ \\xi^{1/2}e^{-\\xi k^2\\tau^2}\\frac{\\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\\frac{\\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi}}\\right)}{2 \\sqrt{\\pi } \\sqrt{\\xi}}\n$$\n$$\n=\\int_0^{\\infty } \\frac{e^{-k^2 \\xi \\tau ^2} \\vartheta _3\\left(\\frac{x}{2},e^{-\\frac{1}{4 \\xi }}\\right) \\vartheta _3\\left(\\frac{y}{2},e^{-\\frac{1}{4 \\xi }}\\right)}{2 \\pi^{3/2} \\sqrt{\\xi }} \\, d\\xi.\n$$\nThe integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p>\n" }, { "answer_id": 2085653, "score": 1, "text": "<p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. \nThe point of this answer is just to propose something which could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p>\n\n<p>Intuitively, $|\\cos(nx) \\cos(y n)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum becomes</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}\\sum_{n=1}^\\infty \\sum_{m=1}^\\infty e^{-z\\sqrt{m^2+n^2}}$.</p>\n\n<p>Again this expression is not tractable with Mathematica, so I thought that an approximation can be made using the Euler–Maclaurin formula.</p>\n\n<p>I start by doing a simple modification on $f_{s}$ :</p>\n\n<p>$f_{s} \\simeq \\frac{2c}{\\pi z}[\\sum_{n=0}^\\infty \\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-2]$.</p>\n\n<p>In one variable, the Euler–Maclaurin formula states that $\\sum_{n=0}^\\infty f(n)\\simeq\\frac{1}{2}f(0)+\\int_{0}^{\\infty}f(n) dn$; I will use this twice in what follows.</p>\n\n<p>We know that\n$\\frac{\\pi}{2z^2} = \\int_{0}^{\\infty}\\int_{0}^{\\infty}e^{-z\\sqrt{m^2+n^2}} dn dm$</p>\n\n<p>then</p>\n\n<p>$\\frac{\\pi}{2z^2} \\simeq \\int_{0}^{\\infty}[\\sum_{m=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}e^{-zn}]dn$</p>\n\n<p>$\\frac{\\pi}{2z^2} \\simeq \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2}\\sum_{m=0}^\\infty e^{-zm}-\\frac{1}{2z}$</p>\n\n<p>$\\frac{\\pi}{2z^2} \\simeq \\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}-\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}-\\frac{1}{2z}$</p>\n\n<p>So, we can now conclude that </p>\n\n<p>$\\sum_{m=0}^\\infty \\sum_{n=0}^\\infty e^{-z\\sqrt{m^2+n^2}}\\simeq \\frac{\\pi}{2z^2}+\\frac{1}{2z}+\\frac{1}{2} \\frac{e^{z}}{e^{z}-1}.$</p>\n\n<p>This is all I was able to do, but I'm not sure that my assumption ($|\\cos(nx) \\cos(y n)|=c$) is reasonable enough to guarantee a good approximation. What do you think?</p>\n" } ]
Question: Closed form for $\sum\limits_{r=-\infty}^\infty \sum\limits_{s=-\infty}^\infty \frac{1}{(k^2\tau^2+(x+2\pi r)^2+(y+2\pi s)^2)^{3/2}}$ Answer 1: <p>This is not a solution, as I haven't found a closed form expression (and I doubt meanwhile that it exists) but a collection of illustrative remarks that might be useful. </p> <p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p> <p>1) While the sum</p> <p>$$f=\sum _{r=-\infty }^{\infty } \sum _{s=-\infty }^{\infty } \frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}$$</p> <p>is returned unevaluated by Mathematica the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p> <p>$$fi=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}dsdr = \frac{1}{2 \pi \sqrt{k^2 \tau ^2}}$$</p> <p>This is of course a purely technical remark as the integral lacks the essential dependence on $x$ and $y$.</p> <p>2) Numerical investigation shows that but a few summand are sufficient to obtain a reasonable approximation to the complete sum.</p> <p>3) Plots of the double sum </p> <p>3a) 3D-plot for two values of $z = k \tau$</p> <p>$z = 1$</p> <p><a href="https://i.stack.imgur.com/C1gdb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C1gdb.jpg" alt="enter image description here"></a></p> <p>$z = \pi /2$ (this is the value to be assigned to z acording to the OP)</p> <p><a href="https://i.stack.imgur.com/xZbt3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xZbt3.jpg" alt="enter image description here"></a></p> <p>We see the expected regular spike structure. 
Where the spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p> <p>3b) Cross sections $y = 0$ for different values of z</p> <p><a href="https://i.stack.imgur.com/fQfIr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fQfIr.jpg" alt="enter image description here"></a></p> <p>3c) Maximum value of $f$ as a function of $z$</p> <p><a href="https://i.stack.imgur.com/aKiTz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aKiTz.jpg" alt="enter image description here"></a></p> <p>Notice that $f \simeq 1/z^3$ for small $z$. Which can also be derived directly form the double sum as is shown in section 5).</p> <p>3d) Minimum value of $f$ as a function of $z$ </p> <p><a href="https://i.stack.imgur.com/MPnhY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MPnhY.jpg" alt="enter image description here"></a></p> <p>4) Physical interpretation of the double sum.</p> <p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \pi (r,s)$.</p> <p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p> <p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p> <p>5) Asymptotic behaviour </p> <p>5a) For small $z$ we have $z \simeq 1/z^3$.</p> <p>Indeed, for small $z$ we can write</p> <p>$$f(0,0,z\to 0)\simeq \frac{4}{(2 \pi )^{3/2}}\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}+\frac{1}{z^3}$$</p> <p>The double sum is a finite constant as can be shown thus</p> <p>$$fr=\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}&lt;\int _1^{\infty }\int _1^{\infty }\frac{1}{\left(r^2+s^2\right)^{3/2}}dsdr=2-\sqrt{2}\simeq 0.585786\text{...}$$</p> <p>This proves the statement. </p> <p>5b) For $z\to\infty$ we find $f\simeq \frac{1}{2 \pi z}$. 
</p> <p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p> <p>5c) Both limits are easily understood from the physical analogy: </p> <p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to the $z \frac{1}{z^3} = \frac{1}{z^2}$, in agreement with Newton's law of gravity.</p> <p>For large $z$, on the other hand, the grid structure is "invisible" to the point mass, and the force becomes equal to the force of an infinite massive plate, and this is constant. Indeed we have found $f \simeq 1/z$, times $z$ this is constant.</p> <p>6) Fourier form of the double sum</p> <p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p> <p>$$fc(x,y,z)=\frac{1}{2 \pi z}\sum _{n=-\infty }^{\infty } \sum _{m=-\infty }^{\infty } e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)$$</p> <p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p> <p>$$fe(x,y,z)=\int_0^{\infty } \frac{e^{-t z^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4\; t}}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4\; t}}\right)}{2 \pi ^{3/2} \sqrt{t}} \, dt$$</p> <p>Subsituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p> <p>$$\vartheta _3\left(\frac{u}{2},q\right)=\sum _{n=-\infty }^{\infty } q^{n^2} \cos (n u)$$</p> <p>and doing the $t$-integral gives</p> <p>$$\cos (m y) \cos (n x) \int_0^{\infty } \frac{\exp \left(-\frac{m^2+n^2}{4 t}-t z^2\right)}{\left(2 \pi ^{3/2}\right) \sqrt{t}} \, dt=\frac{e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)}{2 \pi z}$$</p> <p>The double sum of which is $fc$. 
Done.</p> <p>Remark: it is interesting that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p> <p>$$\frac{1}{2 \pi z}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)dmdn=\frac{1}{\left(x^2+y^2+z^2\right)^{3/2}}$$</p> <p>This expression is less trivial than the one in 1), but the doubly periodic structure is washed out as well.</p> <p>7) Approximations</p> <p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p> <p>It turns out that for $z\gtrsim 1$ the Fourier double sum $fc$ requires fewer terms than the plain sum $f$, while for $z\lesssim 1$ the situation is reversed. The following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p> <p>The case $z = \pi /2$ (the value of the OP)</p> <p><a href="https://i.stack.imgur.com/eIHcS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eIHcS.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/duUs9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/duUs9.jpg" alt="enter image description here"></a></p> <p>The case $z = 1/2$</p> <p><a href="https://i.stack.imgur.com/XEw1m.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XEw1m.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/bnjkG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bnjkG.jpg" alt="enter image description here"></a></p> <p>We conclude that for the OP's value $z = \pi /2$, the Fourier double sum truncated at index $n = 4$ already gives a satisfactory approximation to $f$. Hence we can state (somewhat boldly) that this is the requested closed form expression.</p> Answer 2: <p>Not really a full answer, but too long for a comment. 
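</p>
<p>(A numerical aside, sketched in plain Python purely for illustration — the quadrature step and cutoff below are my own choices: the Schwinger-type integral representation introduced in this answer can be checked directly against the power law it represents.)</p>

```python
import math

def schwinger_lhs(Z):
    # The left-hand side of the identity: Z^(-3/2).
    return Z ** -1.5

def schwinger_rhs(Z, h=1e-3, xi_max=40.0):
    # Midpoint rule for (2/sqrt(pi)) * int_0^inf sqrt(xi) exp(-xi*Z) dxi.
    # The integrand decays like exp(-xi*Z), so a finite cutoff suffices
    # for the moderate Z values tried here.
    n = int(xi_max / h)
    acc = 0.0
    for i in range(n):
        xi = (i + 0.5) * h
        acc += math.sqrt(xi) * math.exp(-xi * Z)
    return 2.0 / math.sqrt(math.pi) * acc * h

for Z in (0.5, 2.0, 7.0):
    print(f"Z={Z}: direct {schwinger_lhs(Z):.6f}  integral {schwinger_rhs(Z):.6f}")
```

<p>The two columns agree to about six digits, confirming the representation before it is applied term by term to the double sum.</p>
<p>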
You can always use the identity $$ \frac{1}{Z^{3/2}}=\frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi Z}\ , $$ for positive $Z$. Your double sum then reads $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\sum_{r=-\infty}^\infty e^{-\xi (x+2\pi r)^2}\sum_{s=-\infty}^\infty e^{-\xi (y+2\pi s)^2} $$ and evaluating the two sums in terms of elliptic theta functions, one gets $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\frac{\vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}}\frac{\vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}} $$ $$ =\int_0^{\infty } \frac{e^{-k^2 \xi \tau ^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi }}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi }}\right)}{2 \pi^{3/2} \sqrt{\xi }} \, d\xi. $$ The integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p> Answer 3: <p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. The point of this answer is just to propose something that could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p> <p>Intuitively, $|\cos(nx)\cos(ny)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum become</p> <p>$f_{s} \simeq \frac{2c}{\pi z}\sum_{n=1}^\infty \sum_{m=1}^\infty e^{-z\sqrt{m^2+n^2}}$.</p> <p>Again this expression is not tractable with Mathematica, so I thought that approximation can be made using the Euler–Maclaurin formula.</p> <p>I start by doing a simple modification on $f_{s}$ :</p> <p>$f_{s} \simeq \frac{2c}{\pi z}[\sum_{n=0}^\infty \sum_{m=0}^\infty e^{-z\sqrt{m^2+n^2}}-2]$.</p> <p>In one variable, Euler_Maclaurin formula state that $\sum_{n=0}^\infty f(n)=\frac{1}{2}f(0)+\int_{0}^{\infty}f(n) dn$, I will use this twice in what follow.</p> <p>We know that $\frac{\pi}{2z^2} = \int_{0}^{\infty}\int_{0}^{\infty}e^{-z\sqrt{m^2+n^2}} dn dm$</p> <p>then</p> <p>$\frac{\pi}{2z^2} = \int_{0}^{\infty}[\sum_{m=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2}e^{-zm}]dm$</p> <p>$\frac{\pi}{2z^2} = \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2}\sum_{n=0}^\infty e^{-zn}+\frac{1}{4}$</p> <p>$\frac{\pi}{2z^2} = \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2} \frac{e^{z}}{e^{z}-1}+\frac{1}{4}$</p> <p>So, we can now conclude that </p> <p>$\sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}= \frac{\pi}{2z^2}+\frac{1}{2} \frac{e^{z}}{e^{z}-1}-\frac{1}{4}.$</p> <p>This all what I was able to do, but I'am not sure that my assumption ($|\cos(nx) \cos(y n)|$=c) is reasonable to guarantee a good approximation. What do you think ?</p> Answer 4: <p>This is not a solution, as I haven't found a closed form expression (and I doubt meanwhile that it exists) but a collection of illustrative remarks that might be useful. 
</p> <p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p> <p>1) While the sum</p> <p>$$f=\sum _{r=-\infty }^{\infty } \sum _{s=-\infty }^{\infty } \frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}$$</p> <p>is returned unevaluated by Mathematica the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p> <p>$$fi=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}dsdr = \frac{1}{2 \pi \sqrt{k^2 \tau ^2}}$$</p> <p>This is of course a purely technical remark as the integral lacks the essential dependence on $x$ and $y$.</p> <p>2) Numerical investigation shows that but a few summand are sufficient to obtain a reasonable approximation to the complete sum.</p> <p>3) Plots of the double sum </p> <p>3a) 3D-plot for two values of $z = k \tau$</p> <p>$z = 1$</p> <p><a href="https://i.stack.imgur.com/C1gdb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C1gdb.jpg" alt="enter image description here"></a></p> <p>$z = \pi /2$ (this is the value to be assigned to z acording to the OP)</p> <p><a href="https://i.stack.imgur.com/xZbt3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xZbt3.jpg" alt="enter image description here"></a></p> <p>We see the expected regular spike structure. 
Where the spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p> <p>3b) Cross sections $y = 0$ for different values of z</p> <p><a href="https://i.stack.imgur.com/fQfIr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fQfIr.jpg" alt="enter image description here"></a></p> <p>3c) Maximum value of $f$ as a function of $z$</p> <p><a href="https://i.stack.imgur.com/aKiTz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aKiTz.jpg" alt="enter image description here"></a></p> <p>Notice that $f \simeq 1/z^3$ for small $z$. Which can also be derived directly form the double sum as is shown in section 5).</p> <p>3d) Minimum value of $f$ as a function of $z$ </p> <p><a href="https://i.stack.imgur.com/MPnhY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MPnhY.jpg" alt="enter image description here"></a></p> <p>4) Physical interpretation of the double sum.</p> <p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \pi (r,s)$.</p> <p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p> <p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p> <p>5) Asymptotic behaviour </p> <p>5a) For small $z$ we have $z \simeq 1/z^3$.</p> <p>Indeed, for small $z$ we can write</p> <p>$$f(0,0,z\to 0)\simeq \frac{4}{(2 \pi )^{3/2}}\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}+\frac{1}{z^3}$$</p> <p>The double sum is a finite constant as can be shown thus</p> <p>$$fr=\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}&lt;\int _1^{\infty }\int _1^{\infty }\frac{1}{\left(r^2+s^2\right)^{3/2}}dsdr=2-\sqrt{2}\simeq 0.585786\text{...}$$</p> <p>This proves the statement. </p> <p>5b) For $z\to\infty$ we find $f\simeq \frac{1}{2 \pi z}$. 
</p> <p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p> <p>5c) Both limits are easily understood from the physical analogy: </p> <p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to the $z \frac{1}{z^3} = \frac{1}{z^2}$, in agreement with Newton's law of gravity.</p> <p>For large $z$, on the other hand, the grid structure is "invisible" to the point mass, and the force becomes equal to the force of an infinite massive plate, and this is constant. Indeed we have found $f \simeq 1/z$, times $z$ this is constant.</p> <p>6) Fourier form of the double sum</p> <p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p> <p>$$fc(x,y,z)=\frac{1}{2 \pi z}\sum _{n=-\infty }^{\infty } \sum _{m=-\infty }^{\infty } e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)$$</p> <p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p> <p>$$fe(x,y,z)=\int_0^{\infty } \frac{e^{-t z^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4\; t}}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4\; t}}\right)}{2 \pi ^{3/2} \sqrt{t}} \, dt$$</p> <p>Subsituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p> <p>$$\vartheta _3\left(\frac{u}{2},q\right)=\sum _{n=-\infty }^{\infty } q^{n^2} \cos (n u)$$</p> <p>and doing the $t$-integral gives</p> <p>$$\cos (m y) \cos (n x) \int_0^{\infty } \frac{\exp \left(-\frac{m^2+n^2}{4 t}-t z^2\right)}{\left(2 \pi ^{3/2}\right) \sqrt{t}} \, dt=\frac{e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)}{2 \pi z}$$</p> <p>The double sum of which is $fc$. 
Done.</p> <p>Remark: it is interesting, that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p> <p>$$\frac{1}{2 \pi z}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)dmdn=\frac{1}{\left(x^2+y^2+z^2\right)^{3/2}}$$</p> <p>This expression is less trivial than the one in 1) but the double periodic structure is washed out as well.</p> <p>7) Approximations</p> <p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p> <p>It turns out that for $z\gtrsim 1$ the Fourier double sum $fc$ requires less terms than the plain sum $f$, while for $z\lesssim 1$ the situation is reversed. The following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p> <p>The case $z = \pi /2$, this is the values of the OP</p> <p><a href="https://i.stack.imgur.com/eIHcS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eIHcS.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/duUs9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/duUs9.jpg" alt="enter image description here"></a></p> <p>The case $z = 1/2$</p> <p><a href="https://i.stack.imgur.com/XEw1m.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XEw1m.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/bnjkG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bnjkG.jpg" alt="enter image description here"></a></p> <p>We conclude that if $z$ takes on the value $\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. Hence we can state (somewhat boldly) that this is the requested closed form expression.</p> Answer 5: <p>Not really a full answer, but too long for a comment. 
You can always use the identity $$ \frac{1}{Z^{3/2}}=\frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi Z}\ , $$ for positive $Z$. Your double sum then reads $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\sum_{r=-\infty}^\infty e^{-\xi (x+2\pi r)^2}\sum_{s=-\infty}^\infty e^{-\xi (y+2\pi s)^2} $$ and evaluating the two sums in terms of Elliptic theta functions, one gets $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\frac{\vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}}\frac{\vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}} $$ $$ =\int_0^{\infty } \frac{e^{-k^2 \xi \tau ^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi }}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi }}\right)}{2 \pi^{3/2} \sqrt{\xi }} \, d\xi. $$ The integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p> Answer 6: <p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. The point of this answer is just to propose something which could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p> <p>Intuitively, $|\cos(nx) \cos(y n)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum become</p> <p>$f_{s} \simeq \frac{2c}{\pi z}\sum_{n=1}^\infty \sum_{m=1}^\infty e^{-z\sqrt{m^2+n^2}}$.</p> <p>Again this expression is not tractable with Mathematica, so I thought that approximation can be made using the Euler–Maclaurin formula.</p> <p>I start by doing a simple modification on $f_{s}$ :</p> <p>$f_{s} \simeq \frac{2c}{\pi z}[\sum_{n=0}^\infty \sum_{m=0}^\infty e^{-z\sqrt{m^2+n^2}}-2]$.</p> <p>In one variable, Euler_Maclaurin formula state that $\sum_{n=0}^\infty f(n)=\frac{1}{2}f(0)+\int_{0}^{\infty}f(n) dn$, I will use this twice in what follow.</p> <p>We know that $\frac{\pi}{2z^2} = \int_{0}^{\infty}\int_{0}^{\infty}e^{-z\sqrt{m^2+n^2}} dn dm$</p> <p>then</p> <p>$\frac{\pi}{2z^2} = \int_{0}^{\infty}[\sum_{m=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2}e^{-zm}]dm$</p> <p>$\frac{\pi}{2z^2} = \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2}\sum_{n=0}^\infty e^{-zn}+\frac{1}{4}$</p> <p>$\frac{\pi}{2z^2} = \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2} \frac{e^{z}}{e^{z}-1}+\frac{1}{4}$</p> <p>So, we can now conclude that </p> <p>$\sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}= \frac{\pi}{2z^2}+\frac{1}{2} \frac{e^{z}}{e^{z}-1}-\frac{1}{4}.$</p> <p>This all what I was able to do, but I'am not sure that my assumption ($|\cos(nx) \cos(y n)|$=c) is reasonable to guarantee a good approximation. What do you think ?</p> Answer 7: <p>This is not a solution, as I haven't found a closed form expression (and I doubt meanwhile that it exists) but a collection of illustrative remarks that might be useful. 
</p> <p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p> <p>1) While the sum</p> <p>$$f=\sum _{r=-\infty }^{\infty } \sum _{s=-\infty }^{\infty } \frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}$$</p> <p>is returned unevaluated by Mathematica the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p> <p>$$fi=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}dsdr = \frac{1}{2 \pi \sqrt{k^2 \tau ^2}}$$</p> <p>This is of course a purely technical remark as the integral lacks the essential dependence on $x$ and $y$.</p> <p>2) Numerical investigation shows that but a few summand are sufficient to obtain a reasonable approximation to the complete sum.</p> <p>3) Plots of the double sum </p> <p>3a) 3D-plot for two values of $z = k \tau$</p> <p>$z = 1$</p> <p><a href="https://i.stack.imgur.com/C1gdb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C1gdb.jpg" alt="enter image description here"></a></p> <p>$z = \pi /2$ (this is the value to be assigned to z acording to the OP)</p> <p><a href="https://i.stack.imgur.com/xZbt3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xZbt3.jpg" alt="enter image description here"></a></p> <p>We see the expected regular spike structure. 
Where the spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p> <p>3b) Cross sections $y = 0$ for different values of z</p> <p><a href="https://i.stack.imgur.com/fQfIr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fQfIr.jpg" alt="enter image description here"></a></p> <p>3c) Maximum value of $f$ as a function of $z$</p> <p><a href="https://i.stack.imgur.com/aKiTz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aKiTz.jpg" alt="enter image description here"></a></p> <p>Notice that $f \simeq 1/z^3$ for small $z$. Which can also be derived directly form the double sum as is shown in section 5).</p> <p>3d) Minimum value of $f$ as a function of $z$ </p> <p><a href="https://i.stack.imgur.com/MPnhY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MPnhY.jpg" alt="enter image description here"></a></p> <p>4) Physical interpretation of the double sum.</p> <p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \pi (r,s)$.</p> <p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p> <p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p> <p>5) Asymptotic behaviour </p> <p>5a) For small $z$ we have $z \simeq 1/z^3$.</p> <p>Indeed, for small $z$ we can write</p> <p>$$f(0,0,z\to 0)\simeq \frac{4}{(2 \pi )^{3/2}}\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}+\frac{1}{z^3}$$</p> <p>The double sum is a finite constant as can be shown thus</p> <p>$$fr=\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}&lt;\int _1^{\infty }\int _1^{\infty }\frac{1}{\left(r^2+s^2\right)^{3/2}}dsdr=2-\sqrt{2}\simeq 0.585786\text{...}$$</p> <p>This proves the statement. </p> <p>5b) For $z\to\infty$ we find $f\simeq \frac{1}{2 \pi z}$. 
</p> <p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p> <p>5c) Both limits are easily understood from the physical analogy: </p> <p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to the $z \frac{1}{z^3} = \frac{1}{z^2}$, in agreement with Newton's law of gravity.</p> <p>For large $z$, on the other hand, the grid structure is "invisible" to the point mass, and the force becomes equal to the force of an infinite massive plate, and this is constant. Indeed we have found $f \simeq 1/z$, times $z$ this is constant.</p> <p>6) Fourier form of the double sum</p> <p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p> <p>$$fc(x,y,z)=\frac{1}{2 \pi z}\sum _{n=-\infty }^{\infty } \sum _{m=-\infty }^{\infty } e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)$$</p> <p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p> <p>$$fe(x,y,z)=\int_0^{\infty } \frac{e^{-t z^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4\; t}}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4\; t}}\right)}{2 \pi ^{3/2} \sqrt{t}} \, dt$$</p> <p>Subsituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p> <p>$$\vartheta _3\left(\frac{u}{2},q\right)=\sum _{n=-\infty }^{\infty } q^{n^2} \cos (n u)$$</p> <p>and doing the $t$-integral gives</p> <p>$$\cos (m y) \cos (n x) \int_0^{\infty } \frac{\exp \left(-\frac{m^2+n^2}{4 t}-t z^2\right)}{\left(2 \pi ^{3/2}\right) \sqrt{t}} \, dt=\frac{e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)}{2 \pi z}$$</p> <p>The double sum of which is $fc$. 
Done.</p> <p>Remark: it is interesting, that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p> <p>$$\frac{1}{2 \pi z}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)dmdn=\frac{1}{\left(x^2+y^2+z^2\right)^{3/2}}$$</p> <p>This expression is less trivial than the one in 1) but the double periodic structure is washed out as well.</p> <p>7) Approximations</p> <p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p> <p>It turns out that for $z\gtrsim 1$ the Fourier double sum $fc$ requires less terms than the plain sum $f$, while for $z\lesssim 1$ the situation is reversed. The following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p> <p>The case $z = \pi /2$, this is the values of the OP</p> <p><a href="https://i.stack.imgur.com/eIHcS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eIHcS.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/duUs9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/duUs9.jpg" alt="enter image description here"></a></p> <p>The case $z = 1/2$</p> <p><a href="https://i.stack.imgur.com/XEw1m.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XEw1m.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/bnjkG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bnjkG.jpg" alt="enter image description here"></a></p> <p>We conclude that if $z$ takes on the value $\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. Hence we can state (somewhat boldly) that this is the requested closed form expression.</p> Answer 8: <p>Not really a full answer, but too long for a comment. 
You can always use the identity $$ \frac{1}{Z^{3/2}}=\frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi Z}\ , $$ for positive $Z$. Your double sum then reads $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\sum_{r=-\infty}^\infty e^{-\xi (x+2\pi r)^2}\sum_{s=-\infty}^\infty e^{-\xi (y+2\pi s)^2} $$ and evaluating the two sums in terms of Elliptic theta functions, one gets $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\frac{\vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}}\frac{\vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}} $$ $$ =\int_0^{\infty } \frac{e^{-k^2 \xi \tau ^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi }}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi }}\right)}{2 \pi^{3/2} \sqrt{\xi }} \, d\xi. $$ The integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p> Answer 9: <p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. The point of this answer is just to propose something which could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p> <p>Intuitively, $|\cos(nx) \cos(y n)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum become</p> <p>$f_{s} \simeq \frac{2c}{\pi z}\sum_{n=1}^\infty \sum_{m=1}^\infty e^{-z\sqrt{m^2+n^2}}$.</p> <p>Again this expression is not tractable with Mathematica, so I thought that approximation can be made using the Euler–Maclaurin formula.</p> <p>I start by doing a simple modification on $f_{s}$ :</p> <p>$f_{s} \simeq \frac{2c}{\pi z}[\sum_{n=0}^\infty \sum_{m=0}^\infty e^{-z\sqrt{m^2+n^2}}-2]$.</p> <p>In one variable, Euler_Maclaurin formula state that $\sum_{n=0}^\infty f(n)=\frac{1}{2}f(0)+\int_{0}^{\infty}f(n) dn$, I will use this twice in what follow.</p> <p>We know that $\frac{\pi}{2z^2} = \int_{0}^{\infty}\int_{0}^{\infty}e^{-z\sqrt{m^2+n^2}} dn dm$</p> <p>then</p> <p>$\frac{\pi}{2z^2} = \int_{0}^{\infty}[\sum_{m=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2}e^{-zm}]dm$</p> <p>$\frac{\pi}{2z^2} = \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2}\sum_{n=0}^\infty e^{-zn}+\frac{1}{4}$</p> <p>$\frac{\pi}{2z^2} = \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2} \frac{e^{z}}{e^{z}-1}+\frac{1}{4}$</p> <p>So, we can now conclude that </p> <p>$\sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}= \frac{\pi}{2z^2}+\frac{1}{2} \frac{e^{z}}{e^{z}-1}-\frac{1}{4}.$</p> <p>This all what I was able to do, but I'am not sure that my assumption ($|\cos(nx) \cos(y n)|$=c) is reasonable to guarantee a good approximation. What do you think ?</p> Answer 10: <p>This is not a solution, as I haven't found a closed form expression (and I doubt meanwhile that it exists) but a collection of illustrative remarks that might be useful. 
</p> <p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p> <p>1) While the sum</p> <p>$$f=\sum _{r=-\infty }^{\infty } \sum _{s=-\infty }^{\infty } \frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}$$</p> <p>is returned unevaluated by Mathematica the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p> <p>$$fi=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}dsdr = \frac{1}{2 \pi \sqrt{k^2 \tau ^2}}$$</p> <p>This is of course a purely technical remark as the integral lacks the essential dependence on $x$ and $y$.</p> <p>2) Numerical investigation shows that but a few summand are sufficient to obtain a reasonable approximation to the complete sum.</p> <p>3) Plots of the double sum </p> <p>3a) 3D-plot for two values of $z = k \tau$</p> <p>$z = 1$</p> <p><a href="https://i.stack.imgur.com/C1gdb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C1gdb.jpg" alt="enter image description here"></a></p> <p>$z = \pi /2$ (this is the value to be assigned to z acording to the OP)</p> <p><a href="https://i.stack.imgur.com/xZbt3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xZbt3.jpg" alt="enter image description here"></a></p> <p>We see the expected regular spike structure. 
Where the spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p> <p>3b) Cross sections $y = 0$ for different values of z</p> <p><a href="https://i.stack.imgur.com/fQfIr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fQfIr.jpg" alt="enter image description here"></a></p> <p>3c) Maximum value of $f$ as a function of $z$</p> <p><a href="https://i.stack.imgur.com/aKiTz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aKiTz.jpg" alt="enter image description here"></a></p> <p>Notice that $f \simeq 1/z^3$ for small $z$. Which can also be derived directly form the double sum as is shown in section 5).</p> <p>3d) Minimum value of $f$ as a function of $z$ </p> <p><a href="https://i.stack.imgur.com/MPnhY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MPnhY.jpg" alt="enter image description here"></a></p> <p>4) Physical interpretation of the double sum.</p> <p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \pi (r,s)$.</p> <p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p> <p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p> <p>5) Asymptotic behaviour </p> <p>5a) For small $z$ we have $z \simeq 1/z^3$.</p> <p>Indeed, for small $z$ we can write</p> <p>$$f(0,0,z\to 0)\simeq \frac{4}{(2 \pi )^{3/2}}\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}+\frac{1}{z^3}$$</p> <p>The double sum is a finite constant as can be shown thus</p> <p>$$fr=\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}&lt;\int _1^{\infty }\int _1^{\infty }\frac{1}{\left(r^2+s^2\right)^{3/2}}dsdr=2-\sqrt{2}\simeq 0.585786\text{...}$$</p> <p>This proves the statement. </p> <p>5b) For $z\to\infty$ we find $f\simeq \frac{1}{2 \pi z}$. 
</p> <p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p> <p>5c) Both limits are easily understood from the physical analogy: </p> <p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to the $z \frac{1}{z^3} = \frac{1}{z^2}$, in agreement with Newton's law of gravity.</p> <p>For large $z$, on the other hand, the grid structure is "invisible" to the point mass, and the force becomes equal to the force of an infinite massive plate, and this is constant. Indeed we have found $f \simeq 1/z$, times $z$ this is constant.</p> <p>6) Fourier form of the double sum</p> <p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p> <p>$$fc(x,y,z)=\frac{1}{2 \pi z}\sum _{n=-\infty }^{\infty } \sum _{m=-\infty }^{\infty } e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)$$</p> <p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p> <p>$$fe(x,y,z)=\int_0^{\infty } \frac{e^{-t z^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4\; t}}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4\; t}}\right)}{2 \pi ^{3/2} \sqrt{t}} \, dt$$</p> <p>Subsituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p> <p>$$\vartheta _3\left(\frac{u}{2},q\right)=\sum _{n=-\infty }^{\infty } q^{n^2} \cos (n u)$$</p> <p>and doing the $t$-integral gives</p> <p>$$\cos (m y) \cos (n x) \int_0^{\infty } \frac{\exp \left(-\frac{m^2+n^2}{4 t}-t z^2\right)}{\left(2 \pi ^{3/2}\right) \sqrt{t}} \, dt=\frac{e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)}{2 \pi z}$$</p> <p>The double sum of which is $fc$. 
Done.</p> <p>Remark: it is interesting, that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p> <p>$$\frac{1}{2 \pi z}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)dmdn=\frac{1}{\left(x^2+y^2+z^2\right)^{3/2}}$$</p> <p>This expression is less trivial than the one in 1) but the double periodic structure is washed out as well.</p> <p>7) Approximations</p> <p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p> <p>It turns out that for $z\gtrsim 1$ the Fourier double sum $fc$ requires less terms than the plain sum $f$, while for $z\lesssim 1$ the situation is reversed. The following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p> <p>The case $z = \pi /2$, this is the values of the OP</p> <p><a href="https://i.stack.imgur.com/eIHcS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eIHcS.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/duUs9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/duUs9.jpg" alt="enter image description here"></a></p> <p>The case $z = 1/2$</p> <p><a href="https://i.stack.imgur.com/XEw1m.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XEw1m.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/bnjkG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bnjkG.jpg" alt="enter image description here"></a></p> <p>We conclude that if $z$ takes on the value $\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. Hence we can state (somewhat boldly) that this is the requested closed form expression.</p> Answer 11: <p>Not really a full answer, but too long for a comment. 
You can always use the identity $$ \frac{1}{Z^{3/2}}=\frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi Z}\ , $$ for positive $Z$. Your double sum then reads $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\sum_{r=-\infty}^\infty e^{-\xi (x+2\pi r)^2}\sum_{s=-\infty}^\infty e^{-\xi (y+2\pi s)^2} $$ and evaluating the two sums in terms of Elliptic theta functions, one gets $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\frac{\vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}}\frac{\vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}} $$ $$ =\int_0^{\infty } \frac{e^{-k^2 \xi \tau ^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi }}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi }}\right)}{2 \pi^{3/2} \sqrt{\xi }} \, d\xi. $$ The integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p> Answer 12: <p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. The point of this answer is just to propose something which could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p> <p>Intuitively, $|\cos(nx) \cos(y n)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum becomes</p> <p>$f_{s} \simeq \frac{2c}{\pi z}\sum_{n=1}^\infty \sum_{m=1}^\infty e^{-z\sqrt{m^2+n^2}}$.</p> <p>Again this expression is not tractable with Mathematica, so I thought that an approximation can be made using the Euler–Maclaurin formula.</p> <p>I start with a simple modification of $f_{s}$ (subtracting the $m=0$ and $n=0$ border sums and adding back the doubly subtracted $(0,0)$ term):</p> <p>$f_{s} \simeq \frac{2c}{\pi z}\left[\sum_{n=0}^\infty \sum_{m=0}^\infty e^{-z\sqrt{m^2+n^2}}-2\sum_{k=0}^\infty e^{-z k}+1\right]$.</p> <p>To first order, the Euler–Maclaurin formula states that $\sum_{n=0}^\infty f(n)\simeq\frac{1}{2}f(0)+\int_{0}^{\infty}f(n)\, dn$; I will use this twice in what follows.</p> <p>We know that $\frac{\pi}{2z^2} = \int_{0}^{\infty}\int_{0}^{\infty}e^{-z\sqrt{m^2+n^2}} \,dn \,dm$,</p> <p>then, applying Euler–Maclaurin in $m$ at fixed $n$,</p> <p>$\frac{\pi}{2z^2} \simeq \int_{0}^{\infty}\Big[\sum_{m=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2}e^{-zn}\Big]dn$</p> <p>and, applying it once more in $n$ (by symmetry the boundary term contributes a further $\frac{1}{2}\sum_{m=0}^\infty e^{-zm}$, so the two half border sums combine into one full sum),</p> <p>$\frac{\pi}{2z^2} \simeq \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\sum_{n=0}^\infty e^{-zn}+\frac{1}{4}$</p> <p>$\frac{\pi}{2z^2} \simeq \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{e^{z}}{e^{z}-1}+\frac{1}{4}$</p> <p>So, we can now conclude that </p> <p>$\sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}\simeq \frac{\pi}{2z^2}+\frac{e^{z}}{e^{z}-1}-\frac{1}{4}.$</p> <p>This is all I was able to do, but I'm not sure that my assumption ($|\cos(nx) \cos(ny)|=c$) is reasonable enough to guarantee a good approximation. What do you think?</p> Answer 13: <p>This is not a solution, as I haven't found a closed form expression (and I meanwhile doubt that one exists), but a collection of illustrative remarks that might be useful.
</p> <p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p> <p>1) While the sum</p> <p>$$f=\sum _{r=-\infty }^{\infty } \sum _{s=-\infty }^{\infty } \frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}$$</p> <p>is returned unevaluated by Mathematica, the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p> <p>$$fi=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}dsdr = \frac{1}{2 \pi \sqrt{k^2 \tau ^2}}$$</p> <p>This is of course a purely technical remark, as the integral lacks the essential dependence on $x$ and $y$.</p> <p>2) Numerical investigation shows that only a few summands are sufficient to obtain a reasonable approximation to the complete sum.</p> <p>3) Plots of the double sum </p> <p>3a) 3D-plot for two values of $z = k \tau$</p> <p>$z = 1$</p> <p><a href="https://i.stack.imgur.com/C1gdb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C1gdb.jpg" alt="enter image description here"></a></p> <p>$z = \pi /2$ (this is the value to be assigned to $z$ according to the OP)</p> <p><a href="https://i.stack.imgur.com/xZbt3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xZbt3.jpg" alt="enter image description here"></a></p> <p>We see the expected regular spike structure.
The spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p> <p>3b) Cross sections $y = 0$ for different values of $z$</p> <p><a href="https://i.stack.imgur.com/fQfIr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fQfIr.jpg" alt="enter image description here"></a></p> <p>3c) Maximum value of $f$ as a function of $z$</p> <p><a href="https://i.stack.imgur.com/aKiTz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aKiTz.jpg" alt="enter image description here"></a></p> <p>Notice that $f \simeq 1/z^3$ for small $z$, which can also be derived directly from the double sum, as shown in section 5).</p> <p>3d) Minimum value of $f$ as a function of $z$ </p> <p><a href="https://i.stack.imgur.com/MPnhY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MPnhY.jpg" alt="enter image description here"></a></p> <p>4) Physical interpretation of the double sum.</p> <p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \pi (r,s)$.</p> <p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p> <p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p> <p>5) Asymptotic behaviour </p> <p>5a) For small $z$ we have $f \simeq 1/z^3$.</p> <p>Indeed, for small $z$ we can write</p> <p>$$f(0,0,z\to 0)\simeq \frac{1}{z^3}+\frac{4}{(2 \pi )^{3}}\left[\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}+\zeta (3)\right]$$</p> <p>(the $\zeta (3)$ term collects the axis contributions with $r=0$ or $s=0$). The double sum is a finite constant, since $r^2+s^2\ge 2 r s$ gives</p> <p>$$fr=\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}\le \frac{1}{2^{3/2}}\left(\sum _{r=1}^{\infty } \frac{1}{r^{3/2}}\right)^2=\frac{\zeta \left(\frac{3}{2}\right)^2}{2 \sqrt{2}}\simeq 2.41$$</p> <p>This proves the statement. </p> <p>5b) For $z\to\infty$ we find $f\simeq \frac{1}{2 \pi z}$.
</p> <p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p> <p>5c) Both limits are easily understood from the physical analogy: </p> <p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to $z \cdot \frac{1}{z^3} = \frac{1}{z^2}$, in agreement with Newton's law of gravity.</p> <p>For large $z$, on the other hand, the grid structure is "invisible" to the point mass, and the force becomes equal to the force of an infinite massive plate, which is constant. Indeed we have found $f \simeq 1/z$; multiplied by $z$ this is constant.</p> <p>6) Fourier form of the double sum</p> <p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series:</p> <p>$$fc(x,y,z)=\frac{1}{2 \pi z}\sum _{n=-\infty }^{\infty } \sum _{m=-\infty }^{\infty } e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)$$</p> <p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p> <p>$$fe(x,y,z)=\int_0^{\infty } \frac{e^{-t z^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4\; t}}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4\; t}}\right)}{2 \pi ^{3/2} \sqrt{t}} \, dt$$</p> <p>Substituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p> <p>$$\vartheta _3\left(\frac{u}{2},q\right)=\sum _{n=-\infty }^{\infty } q^{n^2} \cos (n u)$$</p> <p>and doing the $t$-integral gives</p> <p>$$\cos (m y) \cos (n x) \int_0^{\infty } \frac{\exp \left(-\frac{m^2+n^2}{4 t}-t z^2\right)}{\left(2 \pi ^{3/2}\right) \sqrt{t}} \, dt=\frac{e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)}{2 \pi z}$$</p> <p>The double sum of which is $fc$.
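The equality between the direct lattice sum $f$ and the Fourier form $fc$ can also be verified numerically. A minimal sketch in plain Python (the truncation radii `R` and `N` and the sample point are ad-hoc choices; the tail of the truncated direct sum is of order $1/(4\pi^2 R)$):

```python
import math

def f_direct(x, y, z, R=200):
    """Direct lattice sum, truncated at |r|, |s| <= R."""
    total = 0.0
    for r in range(-R, R + 1):
        a = (x + 2.0 * math.pi * r) ** 2
        for s in range(-R, R + 1):
            b = (y + 2.0 * math.pi * s) ** 2
            total += (z * z + a + b) ** -1.5
    return total

def f_fourier(x, y, z, N=20):
    """Fourier form (1/(2 pi z)) * sum e^{-z sqrt(m^2+n^2)} cos(m y) cos(n x), |m|,|n| <= N."""
    total = 0.0
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            total += math.exp(-z * math.hypot(m, n)) * math.cos(m * y) * math.cos(n * x)
    return total / (2.0 * math.pi * z)

x, y, z = 0.7, 1.3, math.pi / 2
print(f_direct(x, y, z), f_fourier(x, y, z))
```

With these truncations the two evaluations agree to about three decimal places at $z = \pi/2$; the Fourier form converges far faster there, in line with the remarks in section 7).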
Done.</p> <p>Remark: it is interesting, that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p> <p>$$\frac{1}{2 \pi z}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)dmdn=\frac{1}{\left(x^2+y^2+z^2\right)^{3/2}}$$</p> <p>This expression is less trivial than the one in 1) but the double periodic structure is washed out as well.</p> <p>7) Approximations</p> <p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p> <p>It turns out that for $z\gtrsim 1$ the Fourier double sum $fc$ requires less terms than the plain sum $f$, while for $z\lesssim 1$ the situation is reversed. The following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p> <p>The case $z = \pi /2$, this is the values of the OP</p> <p><a href="https://i.stack.imgur.com/eIHcS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eIHcS.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/duUs9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/duUs9.jpg" alt="enter image description here"></a></p> <p>The case $z = 1/2$</p> <p><a href="https://i.stack.imgur.com/XEw1m.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XEw1m.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/bnjkG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bnjkG.jpg" alt="enter image description here"></a></p> <p>We conclude that if $z$ takes on the value $\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. Hence we can state (somewhat boldly) that this is the requested closed form expression.</p> Answer 14: <p>Not really a full answer, but too long for a comment. 
You can always use the identity $$ \frac{1}{Z^{3/2}}=\frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi Z}\ , $$ for positive $Z$. Your double sum then reads $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\sum_{r=-\infty}^\infty e^{-\xi (x+2\pi r)^2}\sum_{s=-\infty}^\infty e^{-\xi (y+2\pi s)^2} $$ and evaluating the two sums in terms of Elliptic theta functions, one gets $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\frac{\vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}}\frac{\vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}} $$ $$ =\int_0^{\infty } \frac{e^{-k^2 \xi \tau ^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi }}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi }}\right)}{2 \pi^{3/2} \sqrt{\xi }} \, d\xi. $$ The integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p> Answer 15: <p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. The point of this answer is just to propose something which could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p> <p>Intuitively, $|\cos(nx) \cos(y n)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum become</p> <p>$f_{s} \simeq \frac{2c}{\pi z}\sum_{n=1}^\infty \sum_{m=1}^\infty e^{-z\sqrt{m^2+n^2}}$.</p> <p>Again this expression is not tractable with Mathematica, so I thought that approximation can be made using the Euler–Maclaurin formula.</p> <p>I start by doing a simple modification on $f_{s}$ :</p> <p>$f_{s} \simeq \frac{2c}{\pi z}[\sum_{n=0}^\infty \sum_{m=0}^\infty e^{-z\sqrt{m^2+n^2}}-2]$.</p> <p>In one variable, Euler_Maclaurin formula state that $\sum_{n=0}^\infty f(n)=\frac{1}{2}f(0)+\int_{0}^{\infty}f(n) dn$, I will use this twice in what follow.</p> <p>We know that $\frac{\pi}{2z^2} = \int_{0}^{\infty}\int_{0}^{\infty}e^{-z\sqrt{m^2+n^2}} dn dm$</p> <p>then</p> <p>$\frac{\pi}{2z^2} = \int_{0}^{\infty}[\sum_{m=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2}e^{-zm}]dm$</p> <p>$\frac{\pi}{2z^2} = \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2}\sum_{n=0}^\infty e^{-zn}+\frac{1}{4}$</p> <p>$\frac{\pi}{2z^2} = \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2} \frac{e^{z}}{e^{z}-1}+\frac{1}{4}$</p> <p>So, we can now conclude that </p> <p>$\sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}= \frac{\pi}{2z^2}+\frac{1}{2} \frac{e^{z}}{e^{z}-1}-\frac{1}{4}.$</p> <p>This all what I was able to do, but I'am not sure that my assumption ($|\cos(nx) \cos(y n)|$=c) is reasonable to guarantee a good approximation. What do you think ?</p> Answer 16: <p>This is not a solution, as I haven't found a closed form expression (and I doubt meanwhile that it exists) but a collection of illustrative remarks that might be useful. 
</p> <p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p> <p>1) While the sum</p> <p>$$f=\sum _{r=-\infty }^{\infty } \sum _{s=-\infty }^{\infty } \frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}$$</p> <p>is returned unevaluated by Mathematica the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p> <p>$$fi=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}dsdr = \frac{1}{2 \pi \sqrt{k^2 \tau ^2}}$$</p> <p>This is of course a purely technical remark as the integral lacks the essential dependence on $x$ and $y$.</p> <p>2) Numerical investigation shows that but a few summand are sufficient to obtain a reasonable approximation to the complete sum.</p> <p>3) Plots of the double sum </p> <p>3a) 3D-plot for two values of $z = k \tau$</p> <p>$z = 1$</p> <p><a href="https://i.stack.imgur.com/C1gdb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C1gdb.jpg" alt="enter image description here"></a></p> <p>$z = \pi /2$ (this is the value to be assigned to z acording to the OP)</p> <p><a href="https://i.stack.imgur.com/xZbt3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xZbt3.jpg" alt="enter image description here"></a></p> <p>We see the expected regular spike structure. 
Where the spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p> <p>3b) Cross sections $y = 0$ for different values of z</p> <p><a href="https://i.stack.imgur.com/fQfIr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fQfIr.jpg" alt="enter image description here"></a></p> <p>3c) Maximum value of $f$ as a function of $z$</p> <p><a href="https://i.stack.imgur.com/aKiTz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aKiTz.jpg" alt="enter image description here"></a></p> <p>Notice that $f \simeq 1/z^3$ for small $z$. Which can also be derived directly form the double sum as is shown in section 5).</p> <p>3d) Minimum value of $f$ as a function of $z$ </p> <p><a href="https://i.stack.imgur.com/MPnhY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MPnhY.jpg" alt="enter image description here"></a></p> <p>4) Physical interpretation of the double sum.</p> <p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \pi (r,s)$.</p> <p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p> <p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p> <p>5) Asymptotic behaviour </p> <p>5a) For small $z$ we have $z \simeq 1/z^3$.</p> <p>Indeed, for small $z$ we can write</p> <p>$$f(0,0,z\to 0)\simeq \frac{4}{(2 \pi )^{3/2}}\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}+\frac{1}{z^3}$$</p> <p>The double sum is a finite constant as can be shown thus</p> <p>$$fr=\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}&lt;\int _1^{\infty }\int _1^{\infty }\frac{1}{\left(r^2+s^2\right)^{3/2}}dsdr=2-\sqrt{2}\simeq 0.585786\text{...}$$</p> <p>This proves the statement. </p> <p>5b) For $z\to\infty$ we find $f\simeq \frac{1}{2 \pi z}$. 
</p> <p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p> <p>5c) Both limits are easily understood from the physical analogy: </p> <p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to the $z \frac{1}{z^3} = \frac{1}{z^2}$, in agreement with Newton's law of gravity.</p> <p>For large $z$, on the other hand, the grid structure is "invisible" to the point mass, and the force becomes equal to the force of an infinite massive plate, and this is constant. Indeed we have found $f \simeq 1/z$, times $z$ this is constant.</p> <p>6) Fourier form of the double sum</p> <p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p> <p>$$fc(x,y,z)=\frac{1}{2 \pi z}\sum _{n=-\infty }^{\infty } \sum _{m=-\infty }^{\infty } e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)$$</p> <p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p> <p>$$fe(x,y,z)=\int_0^{\infty } \frac{e^{-t z^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4\; t}}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4\; t}}\right)}{2 \pi ^{3/2} \sqrt{t}} \, dt$$</p> <p>Subsituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p> <p>$$\vartheta _3\left(\frac{u}{2},q\right)=\sum _{n=-\infty }^{\infty } q^{n^2} \cos (n u)$$</p> <p>and doing the $t$-integral gives</p> <p>$$\cos (m y) \cos (n x) \int_0^{\infty } \frac{\exp \left(-\frac{m^2+n^2}{4 t}-t z^2\right)}{\left(2 \pi ^{3/2}\right) \sqrt{t}} \, dt=\frac{e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)}{2 \pi z}$$</p> <p>The double sum of which is $fc$. 
Done.</p> <p>Remark: it is interesting, that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p> <p>$$\frac{1}{2 \pi z}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)dmdn=\frac{1}{\left(x^2+y^2+z^2\right)^{3/2}}$$</p> <p>This expression is less trivial than the one in 1) but the double periodic structure is washed out as well.</p> <p>7) Approximations</p> <p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p> <p>It turns out that for $z\gtrsim 1$ the Fourier double sum $fc$ requires less terms than the plain sum $f$, while for $z\lesssim 1$ the situation is reversed. The following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p> <p>The case $z = \pi /2$, this is the values of the OP</p> <p><a href="https://i.stack.imgur.com/eIHcS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eIHcS.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/duUs9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/duUs9.jpg" alt="enter image description here"></a></p> <p>The case $z = 1/2$</p> <p><a href="https://i.stack.imgur.com/XEw1m.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XEw1m.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/bnjkG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bnjkG.jpg" alt="enter image description here"></a></p> <p>We conclude that if $z$ takes on the value $\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. Hence we can state (somewhat boldly) that this is the requested closed form expression.</p> Answer 17: <p>Not really a full answer, but too long for a comment. 
You can always use the identity $$ \frac{1}{Z^{3/2}}=\frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi Z}\ , $$ for positive $Z$. Your double sum then reads $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\sum_{r=-\infty}^\infty e^{-\xi (x+2\pi r)^2}\sum_{s=-\infty}^\infty e^{-\xi (y+2\pi s)^2} $$ and evaluating the two sums in terms of Elliptic theta functions, one gets $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\frac{\vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}}\frac{\vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}} $$ $$ =\int_0^{\infty } \frac{e^{-k^2 \xi \tau ^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi }}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi }}\right)}{2 \pi^{3/2} \sqrt{\xi }} \, d\xi. $$ The integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p> Answer 18: <p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. The point of this answer is just to propose something which could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p> <p>Intuitively, $|\cos(nx) \cos(y n)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum become</p> <p>$f_{s} \simeq \frac{2c}{\pi z}\sum_{n=1}^\infty \sum_{m=1}^\infty e^{-z\sqrt{m^2+n^2}}$.</p> <p>Again this expression is not tractable with Mathematica, so I thought that approximation can be made using the Euler–Maclaurin formula.</p> <p>I start by doing a simple modification on $f_{s}$ :</p> <p>$f_{s} \simeq \frac{2c}{\pi z}[\sum_{n=0}^\infty \sum_{m=0}^\infty e^{-z\sqrt{m^2+n^2}}-2]$.</p> <p>In one variable, Euler_Maclaurin formula state that $\sum_{n=0}^\infty f(n)=\frac{1}{2}f(0)+\int_{0}^{\infty}f(n) dn$, I will use this twice in what follow.</p> <p>We know that $\frac{\pi}{2z^2} = \int_{0}^{\infty}\int_{0}^{\infty}e^{-z\sqrt{m^2+n^2}} dn dm$</p> <p>then</p> <p>$\frac{\pi}{2z^2} = \int_{0}^{\infty}[\sum_{m=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2}e^{-zm}]dm$</p> <p>$\frac{\pi}{2z^2} = \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2}\sum_{n=0}^\infty e^{-zn}+\frac{1}{4}$</p> <p>$\frac{\pi}{2z^2} = \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2} \frac{e^{z}}{e^{z}-1}+\frac{1}{4}$</p> <p>So, we can now conclude that </p> <p>$\sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}= \frac{\pi}{2z^2}+\frac{1}{2} \frac{e^{z}}{e^{z}-1}-\frac{1}{4}.$</p> <p>This all what I was able to do, but I'am not sure that my assumption ($|\cos(nx) \cos(y n)|$=c) is reasonable to guarantee a good approximation. What do you think ?</p> Answer 19: <p>This is not a solution, as I haven't found a closed form expression (and I doubt meanwhile that it exists) but a collection of illustrative remarks that might be useful. 
</p> <p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p> <p>1) While the sum</p> <p>$$f=\sum _{r=-\infty }^{\infty } \sum _{s=-\infty }^{\infty } \frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}$$</p> <p>is returned unevaluated by Mathematica the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p> <p>$$fi=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}dsdr = \frac{1}{2 \pi \sqrt{k^2 \tau ^2}}$$</p> <p>This is of course a purely technical remark as the integral lacks the essential dependence on $x$ and $y$.</p> <p>2) Numerical investigation shows that but a few summand are sufficient to obtain a reasonable approximation to the complete sum.</p> <p>3) Plots of the double sum </p> <p>3a) 3D-plot for two values of $z = k \tau$</p> <p>$z = 1$</p> <p><a href="https://i.stack.imgur.com/C1gdb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C1gdb.jpg" alt="enter image description here"></a></p> <p>$z = \pi /2$ (this is the value to be assigned to z acording to the OP)</p> <p><a href="https://i.stack.imgur.com/xZbt3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xZbt3.jpg" alt="enter image description here"></a></p> <p>We see the expected regular spike structure. 
Where the spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p> <p>3b) Cross sections $y = 0$ for different values of z</p> <p><a href="https://i.stack.imgur.com/fQfIr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fQfIr.jpg" alt="enter image description here"></a></p> <p>3c) Maximum value of $f$ as a function of $z$</p> <p><a href="https://i.stack.imgur.com/aKiTz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aKiTz.jpg" alt="enter image description here"></a></p> <p>Notice that $f \simeq 1/z^3$ for small $z$. Which can also be derived directly form the double sum as is shown in section 5).</p> <p>3d) Minimum value of $f$ as a function of $z$ </p> <p><a href="https://i.stack.imgur.com/MPnhY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MPnhY.jpg" alt="enter image description here"></a></p> <p>4) Physical interpretation of the double sum.</p> <p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \pi (r,s)$.</p> <p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p> <p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p> <p>5) Asymptotic behaviour </p> <p>5a) For small $z$ we have $z \simeq 1/z^3$.</p> <p>Indeed, for small $z$ we can write</p> <p>$$f(0,0,z\to 0)\simeq \frac{4}{(2 \pi )^{3/2}}\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}+\frac{1}{z^3}$$</p> <p>The double sum is a finite constant as can be shown thus</p> <p>$$fr=\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}&lt;\int _1^{\infty }\int _1^{\infty }\frac{1}{\left(r^2+s^2\right)^{3/2}}dsdr=2-\sqrt{2}\simeq 0.585786\text{...}$$</p> <p>This proves the statement. </p> <p>5b) For $z\to\infty$ we find $f\simeq \frac{1}{2 \pi z}$. 
</p> <p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p> <p>5c) Both limits are easily understood from the physical analogy: </p> <p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to the $z \frac{1}{z^3} = \frac{1}{z^2}$, in agreement with Newton's law of gravity.</p> <p>For large $z$, on the other hand, the grid structure is "invisible" to the point mass, and the force becomes equal to the force of an infinite massive plate, and this is constant. Indeed we have found $f \simeq 1/z$, times $z$ this is constant.</p> <p>6) Fourier form of the double sum</p> <p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p> <p>$$fc(x,y,z)=\frac{1}{2 \pi z}\sum _{n=-\infty }^{\infty } \sum _{m=-\infty }^{\infty } e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)$$</p> <p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p> <p>$$fe(x,y,z)=\int_0^{\infty } \frac{e^{-t z^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4\; t}}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4\; t}}\right)}{2 \pi ^{3/2} \sqrt{t}} \, dt$$</p> <p>Subsituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p> <p>$$\vartheta _3\left(\frac{u}{2},q\right)=\sum _{n=-\infty }^{\infty } q^{n^2} \cos (n u)$$</p> <p>and doing the $t$-integral gives</p> <p>$$\cos (m y) \cos (n x) \int_0^{\infty } \frac{\exp \left(-\frac{m^2+n^2}{4 t}-t z^2\right)}{\left(2 \pi ^{3/2}\right) \sqrt{t}} \, dt=\frac{e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)}{2 \pi z}$$</p> <p>The double sum of which is $fc$. 
Done.</p> <p>Remark: it is interesting, that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p> <p>$$\frac{1}{2 \pi z}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)dmdn=\frac{1}{\left(x^2+y^2+z^2\right)^{3/2}}$$</p> <p>This expression is less trivial than the one in 1) but the double periodic structure is washed out as well.</p> <p>7) Approximations</p> <p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p> <p>It turns out that for $z\gtrsim 1$ the Fourier double sum $fc$ requires less terms than the plain sum $f$, while for $z\lesssim 1$ the situation is reversed. The following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p> <p>The case $z = \pi /2$, this is the values of the OP</p> <p><a href="https://i.stack.imgur.com/eIHcS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eIHcS.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/duUs9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/duUs9.jpg" alt="enter image description here"></a></p> <p>The case $z = 1/2$</p> <p><a href="https://i.stack.imgur.com/XEw1m.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XEw1m.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/bnjkG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bnjkG.jpg" alt="enter image description here"></a></p> <p>We conclude that if $z$ takes on the value $\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. Hence we can state (somewhat boldly) that this is the requested closed form expression.</p> Answer 20: <p>Not really a full answer, but too long for a comment. 
You can always use the identity $$ \frac{1}{Z^{3/2}}=\frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi Z}\ , $$ for positive $Z$. Your double sum then reads $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\sum_{r=-\infty}^\infty e^{-\xi (x+2\pi r)^2}\sum_{s=-\infty}^\infty e^{-\xi (y+2\pi s)^2} $$ and evaluating the two sums in terms of Elliptic theta functions, one gets $$ \frac{2}{\sqrt{\pi}}\int_0^\infty d\xi\ \xi^{1/2}e^{-\xi k^2\tau^2}\frac{\vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}}\frac{\vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi}}\right)}{2 \sqrt{\pi } \sqrt{\xi}} $$ $$ =\int_0^{\infty } \frac{e^{-k^2 \xi \tau ^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4 \xi }}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4 \xi }}\right)}{2 \pi^{3/2} \sqrt{\xi }} \, d\xi. $$ The integral does not seem to be easy to compute in closed form, but might be better suited for an asymptotic/approximate evaluation. </p> Answer 21: <p>@Dr. Wolfgang Hintze, Thank you very much for the contribution. The point of this answer is just to propose something which could complete the derivation of $f_{s}$ you proposed in the previous answer. At this step I don't think there is a closed form solution, but a reasonable approximation can be made.</p> <p>Intuitively, $|\cos(nx) \cos(y n)|$ is spread out equally between 0 and 1. 
If we approximate it as a constant $c&gt;0$, then the sum becomes</p> <p>$f_{s} \simeq \frac{2c}{\pi z}\sum_{n=1}^\infty \sum_{m=1}^\infty e^{-z\sqrt{m^2+n^2}}$.</p> <p>Again this expression is not tractable with Mathematica, so I thought that an approximation can be made using the Euler–Maclaurin formula.</p> <p>I start by doing a simple modification to $f_{s}$ (removing the two axis sums $\sum_{n\ge 1}e^{-zn}=\frac{1}{e^z-1}$ and the corner term):</p> <p>$f_{s} \simeq \frac{2c}{\pi z}\left[\sum_{n=0}^\infty \sum_{m=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{2}{e^{z}-1}-1\right]$.</p> <p>In one variable, the Euler–Maclaurin formula gives, to leading order, $\sum_{n=0}^\infty f(n)\approx\frac{1}{2}f(0)+\int_{0}^{\infty}f(n)\, dn$; I will use this twice in what follows.</p> <p>We know that $\frac{\pi}{2z^2} = \int_{0}^{\infty}\int_{0}^{\infty}e^{-z\sqrt{m^2+n^2}} dn dm$</p> <p>then</p> <p>$\frac{\pi}{2z^2} \simeq \int_{0}^{\infty}\left[\sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2}e^{-zm}\right]dm$</p> <p>$\frac{\pi}{2z^2} \simeq \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{1}{2}\sum_{n=0}^\infty e^{-zn}-\frac{1}{2}\sum_{m=0}^\infty e^{-zm}+\frac{1}{4}=\sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\sum_{n=0}^\infty e^{-zn}+\frac{1}{4}$</p> <p>$\frac{\pi}{2z^2} \simeq \sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}-\frac{e^{z}}{e^{z}-1}+\frac{1}{4}$</p> <p>So, we can now conclude that </p> <p>$\sum_{m=0}^\infty \sum_{n=0}^\infty e^{-z\sqrt{m^2+n^2}}\simeq \frac{\pi}{2z^2}+\frac{e^{z}}{e^{z}-1}-\frac{1}{4}.$</p> <p>This is all I was able to do, but I'm not sure that my assumption ($|\cos(nx) \cos(y n)| \approx c$) is reasonable enough to guarantee a good approximation. What do you think?</p> Answer 22: <p>This is not a solution, as I haven't found a closed form expression (and I meanwhile doubt that it exists), but a collection of illustrative remarks that might be useful. 
</p> <p>Notice that I have reinserted and simplified the derivation of the Fourier form of the double sum in 6).</p> <p>1) While the sum</p> <p>$$f=\sum _{r=-\infty }^{\infty } \sum _{s=-\infty }^{\infty } \frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}$$</p> <p>is returned unevaluated by Mathematica, the corresponding integral has a very simple closed form solution (independent of $x$ and $y$)</p> <p>$$fi=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\frac{1}{\left(k^2 \tau ^2+(2 \pi r+x)^2+(2 \pi s+y)^2\right)^{3/2}}dsdr = \frac{1}{2 \pi \sqrt{k^2 \tau ^2}}$$</p> <p>This is of course a purely technical remark, as the integral lacks the essential dependence on $x$ and $y$.</p> <p>2) Numerical investigation shows that only a few summands are sufficient to obtain a reasonable approximation to the complete sum.</p> <p>3) Plots of the double sum </p> <p>3a) 3D-plot for two values of $z = k \tau$</p> <p>$z = 1$</p> <p><a href="https://i.stack.imgur.com/C1gdb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C1gdb.jpg" alt="enter image description here"></a></p> <p>$z = \pi /2$ (this is the value to be assigned to $z$ according to the OP)</p> <p><a href="https://i.stack.imgur.com/xZbt3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xZbt3.jpg" alt="enter image description here"></a></p> <p>We see the expected regular spike structure. 
The spikes become steeper with smaller $z$ (notice that the vertical scale is the same in the two pictures).</p> <p>3b) Cross sections $y = 0$ for different values of $z$</p> <p><a href="https://i.stack.imgur.com/fQfIr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fQfIr.jpg" alt="enter image description here"></a></p> <p>3c) Maximum value of $f$ as a function of $z$</p> <p><a href="https://i.stack.imgur.com/aKiTz.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aKiTz.jpg" alt="enter image description here"></a></p> <p>Notice that $f \simeq 1/z^3$ for small $z$, which can also be derived directly from the double sum, as shown in section 5).</p> <p>3d) Minimum value of $f$ as a function of $z$ </p> <p><a href="https://i.stack.imgur.com/MPnhY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MPnhY.jpg" alt="enter image description here"></a></p> <p>4) Physical interpretation of the double sum.</p> <p>Consider an infinite array of unit masses placed in the x-y-plane at the points $(x,y) = 2 \pi (r,s)$.</p> <p>The double sum $f$ multiplied by $z$ is then equal to the vertical component of the gravitational force on a unit mass at the point $(x,y,z)$.</p> <p>It is clear from this analogue model that the force is strong for small $z$ and decreases for increasing $z$.</p> <p>5) Asymptotic behaviour </p> <p>5a) For small $z$ we have $f \simeq 1/z^3$.</p> <p>Indeed, for small $z$ we can write (collecting the quadrant terms and the axis terms separately)</p> <p>$$f(0,0,z\to 0)\simeq \frac{1}{z^3}+\frac{4}{(2 \pi )^{3}}\left(\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}+\sum _{r=1}^{\infty } \frac{1}{r^3}\right)$$</p> <p>The double sum is a finite constant, as can be shown using $r^2+s^2\ge 2 r s$:</p> <p>$$fr=\sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{\left(r^2+s^2\right)^{3/2}}\le \sum _{r=1}^{\infty } \sum _{s=1}^{\infty } \frac{1}{(2 r s)^{3/2}}=\frac{\zeta \left(\frac{3}{2}\right)^2}{2^{3/2}}\simeq 2.41\text{...}$$</p> <p>This proves the statement. </p> <p>5b) For $z\to\infty$ we find $f\simeq \frac{1}{2 \pi z}$. 
</p> <p>Indeed, in this case the double sum can be approximated by the double integral as calculated in section 1).</p> <p>5c) Both limits are easily understood from the physical analogy: </p> <p>For small $z$ and $(x,y) = (0,0)$ we are approaching a single mass at the origin. The force we have calculated is equal to $z \cdot \frac{1}{z^3} = \frac{1}{z^2}$, in agreement with Newton's law of gravity.</p> <p>For large $z$, on the other hand, the grid structure is "invisible" to the point mass, and the force becomes equal to the force of an infinite massive plate, which is constant. Indeed we have found $f \simeq 1/z$; times $z$, this is constant.</p> <p>6) Fourier form of the double sum</p> <p>The double sum $f$, renamed $fc$ here, can also be written as a double Fourier series.</p> <p>$$fc(x,y,z)=\frac{1}{2 \pi z}\sum _{n=-\infty }^{\infty } \sum _{m=-\infty }^{\infty } e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)$$</p> <p>The derivation starts from the expression obtained in the solution of Pierpaolo Vivo </p> <p>$$fe(x,y,z)=\int_0^{\infty } \frac{e^{-t z^2} \vartheta _3\left(\frac{x}{2},e^{-\frac{1}{4\; t}}\right) \vartheta _3\left(\frac{y}{2},e^{-\frac{1}{4\; t}}\right)}{2 \pi ^{3/2} \sqrt{t}} \, dt$$</p> <p>Substituting the expression for the Jacobi theta function (see the real part of formula (4) in <a href="http://mathworld.wolfram.com/JacobiThetaFunctions.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/JacobiThetaFunctions.html</a>)</p> <p>$$\vartheta _3\left(\frac{u}{2},q\right)=\sum _{n=-\infty }^{\infty } q^{n^2} \cos (n u)$$</p> <p>and doing the $t$-integral gives</p> <p>$$\cos (m y) \cos (n x) \int_0^{\infty } \frac{\exp \left(-\frac{m^2+n^2}{4 t}-t z^2\right)}{\left(2 \pi ^{3/2}\right) \sqrt{t}} \, dt=\frac{e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)}{2 \pi z}$$</p> <p>Summing these terms over $m$ and $n$ gives $fc$. 
Done.</p> <p>Remark: it is interesting that replacing the double sum by a double integral gives (hint: transform to polar coordinates)</p> <p>$$\frac{1}{2 \pi z}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{-z \sqrt{m^2+n^2}} \cos (m y) \cos (n x)dmdn=\frac{1}{\left(x^2+y^2+z^2\right)^{3/2}}$$</p> <p>This expression is less trivial than the one in 1) but the doubly periodic structure is washed out as well.</p> <p>7) Approximations</p> <p>We can approximate the function by taking only a finite number $n$ of terms, defined here as letting the summation indices run from $-n$ to $n$.</p> <p>It turns out that for $z\gtrsim 1$ the Fourier double sum $fc$ requires fewer terms than the plain sum $f$, while for $z\lesssim 1$ the situation is reversed. The following graphs show the typical behaviour of the maximum and minimum values as a function of the number of terms.</p> <p>The case $z = \pi /2$ (this is the value from the OP)</p> <p><a href="https://i.stack.imgur.com/eIHcS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eIHcS.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/duUs9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/duUs9.jpg" alt="enter image description here"></a></p> <p>The case $z = 1/2$</p> <p><a href="https://i.stack.imgur.com/XEw1m.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XEw1m.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/bnjkG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bnjkG.jpg" alt="enter image description here"></a></p> <p>We conclude that if $z$ takes on the value $\pi /2$ of the OP, then the Fourier double sum truncated at index $n = 4$ gives a satisfactory approximation to $f$. Hence we can state (somewhat boldly) that this is the requested closed form expression.</p>
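As a numerical cross-check of the truncation claims in 7), the plain lattice sum $f$ and its Fourier form $fc$ can be compared directly. The sketch below (function names and cutoffs are my own choices) truncates the Fourier double sum at index $n = 4$ and the plain sum at a larger index:

```python
import math

def f_direct(x, y, z, n=60):
    # plain lattice sum f, summation indices running from -n to n
    return sum(
        (z * z + (x + 2 * math.pi * r) ** 2 + (y + 2 * math.pi * s) ** 2) ** -1.5
        for r in range(-n, n + 1)
        for s in range(-n, n + 1)
    )

def f_fourier(x, y, z, n=4):
    # Fourier form fc = (1 / (2 pi z)) * sum over m, k of
    # exp(-z * sqrt(m^2 + k^2)) * cos(m y) * cos(k x)
    total = sum(
        math.exp(-z * math.hypot(m, k)) * math.cos(m * y) * math.cos(k * x)
        for m in range(-n, n + 1)
        for k in range(-n, n + 1)
    )
    return total / (2 * math.pi * z)
```

For $z = \pi/2$ the two evaluations agree closely already with only $n = 4$ Fourier terms, while the slowly decaying plain sum needs a much larger cutoff, in line with the behaviour described above.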
Differential equation combined with functional equation which looks simple
2,082,251
5
https://math.stackexchange.com/questions/2082251/differential-equation-combined-with-functional-equation-which-looks-simple
<p>The equation is <span class="math-container">$$f'(x)=cf(x/2).$$</span> The problem emerges when I'm trying to deal with a partial differential equation <span class="math-container">$$u_{t}(x,t)=u_{xx}(x,t/2).$$</span> Either using separation of variables: <span class="math-container">$$u(x,t)=X(x)T(t)\rightarrow X(x)T'(t)=X''(x)T(t/2)$$</span> so that <span class="math-container">$$X''(x)-cX(x)=0, T'(t)-cT(t/2)=0$$</span> or doing a Fourier transform: <span class="math-container">$$\hat{u}_t(s,t)+s^2\hat{u}(s,t/2)=0$$</span> eventually reduces the problem to solving <span class="math-container">$$f'(x)=cf(x/2).$$</span></p> <p>But the equation does not seem easy to solve. I can only prove the existence of solutions using the Euler method. The numerical simulation looks like</p> <p><a href="https://i.stack.imgur.com/rg9mr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rg9mr.png" alt="numerical simulation"></a></p> <p>The other function plotted for comparison is the exponential function.</p> <p>Does a closed-form solution to the equation exist? If not, how do we solve the original PDE?</p>
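The Euler construction mentioned in the question can be sketched concretely. The following minimal script (the function name, step size and interval are my own choices) integrates $f'(x)=c\,f(x/2)$ by forward Euler; since $x/2 \le x$, the needed value $f(x/2)$ always lies in the part of the grid already computed and can be recovered by linear interpolation:

```python
import math

def euler_pantograph(c=1.0, f0=1.0, h=1e-3, x_max=5.0):
    # forward Euler for f'(x) = c * f(x/2); the argument x/2 never exceeds x,
    # so f(x/2) is interpolated linearly from the grid values built so far
    xs = [0.0]
    fs = [f0]
    while xs[-1] < x_max:
        x = xs[-1]
        j = int((x / 2) / h)              # grid index just below x/2
        t = (x / 2 - j * h) / h           # fractional position within the cell
        f_half = fs[j] + t * (fs[min(j + 1, len(fs) - 1)] - fs[j])
        fs.append(fs[-1] + h * c * f_half)
        xs.append(x + h)
    return xs, fs
```

Plotting `fs` against `xs` for $c = 1$ reproduces the sub-exponential growth seen in the figure: the solution grows more slowly than $e^x$ because the slope is driven by the (smaller) value at $x/2$.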
[ { "answer_id": 2082313, "score": 2, "text": "<p>To find the analytic solution, we can use a power series expansion around $x=0$ in the usual way:\n$$ 0 = f'(x) - c f(x/2) = \\sum_{n=0}^{\\infty} \\left( f^{(n+1)}(0)-\\frac{c}{2^n}f^{(n)}(0) \\right) \\frac{x^n}{n!}, $$\nand then equating coefficients gives\n$$ f^{(n+1)}(0)=\\frac{c}{2^n}f^{(n)}(0). $$\nIt is then simple to prove by induction that\n$$ f^{(n)}(0) = \\frac{c^n}{2^{1+2+3+\\dotsb+(n-1)}} f(0) = \\frac{c^n}{2^{n(n-1)/2}} f(0), $$\nso\n$$ f(x) = f(0) \\sum_{n=0}^{\\infty} \\frac{c^n}{2^{n(n-1)/2}} \\frac{x^n}{n!}. $$</p>\n" }, { "answer_id": 2082319, "score": 2, "text": "<p>You can show pretty easily that $f$ is an entire function. Indeed, taking an arbitrary $ K&gt;0$ and setting $M_n=\\|f^{(n)}\\|_{\\infty, [-K,K]}$, by differentiating the equation $n$ times we get the relation $M_n \\leq \\frac{c}{2^{n-1}}M_{n-1}$ for all positive integers $n$. This inequality yields $M_n \\leq \\frac{c^n}{2^{\\frac{n(n-1)}{2}}}M_0$. This shows that $M_n$ converges to $0$, and thus, by Taylor estimates, $f$ is entire.</p>\n\n<p>Now writing</p>\n\n<p>\\begin{equation*}\nf(x) = \\sum_{k=0}^{\\infty} a_k x^k\n\\end{equation*}</p>\n\n<p>we get by substituting in the equation: \n\\begin{equation*}\n \\sum_{k=0}^{\\infty}(k+1) a_{k+1}x^k = \\sum_{k=0}^{\\infty} \\frac{c\\, a_k}{2^k} x^k\n\\end{equation*}</p>\n\n<p>This yields $a_{k+1} = \\frac{c}{k+1}\\cdot\\frac{a_k}{2^k}$. Therefore, $a_k= \\frac{c^k}{k!} \\cdot \\frac{f(0)}{2^{\\frac{k(k-1)}{2}}}$, which gives the function you are looking for.</p>\n" } ]
This shows that $M_n$ converges to $0$ and thus by using taylor estimations, $f$ is entire.</p>\n\n<p>Now writing :</p>\n\n<p>\\begin{equation*}\nf(x) = \\sum_{k=0}^{\\infty} a_k x^k\n\\end{equation*}</p>\n\n<p>We get by substituting in the equation: \n\\begin{equation*}\n \\sum_{k=0}^{\\infty}(k+1) a_{k+1}x^k = \\sum_{k=0}^{\\infty} \\frac{c}{2^k} x^k\n\\end{equation*}</p>\n\n<p>Which yields $a_{k+1} = \\frac{c}{k+1}\\cdot\\frac{a_k}{2^k}$. Therefore, $a_k= \\frac{c^k}{k!} \\cdot \\frac{f(0)}{2^{\\frac{k(k-1}{2}}}$. Which gives you the function you are looking for.</p>\n" }, { "answer_id": 2082313, "score": 2, "text": "<p>To find the analytic solution, we can use a power series expansion around $x=0$ in the usual way:\n$$ 0 = f'(x) - c f(x/2) = \\sum_{n=0}^{\\infty} \\left( f^{(n+1)}(0)-\\frac{c}{2^n}f^{(n)}(0) \\right) \\frac{x^n}{n!}, $$\nand then equating coefficients gives\n$$ f^{(n+1)}(0)=\\frac{c}{2^n}f^{(n)}(0). $$\nIt is then simple to prove by induction that\n$$ f^{(n)}(0) = \\frac{c^n}{2^{1+2+3+\\dotsb+(n-1)}} f(0) = \\frac{c^n}{2^{n(n-1)/2}} f(0), $$\nso\n$$ f(x) = f(0) \\sum_{n=0}^{\\infty} \\frac{c^n}{2^{n(n+1)/2}} \\frac{x^n}{n!}. $$</p>\n" }, { "answer_id": 2082319, "score": 2, "text": "<p>You can show pretty easily that $f$ is an entire function. Indeed, by taking an arbitrary $ K&gt;0$, and setting $M_n=\\|f^{(n)}\\|_{\\infty, [-K,K]}$, by differentiating the equation n times, we get the relation $M_n \\leq \\frac{c}{2^{n-1}}M_{n-1}$ for all positive integers $n$. This inequality yields $M_n \\leq \\frac{c^n}{2^{\\frac{n(n-1)}{2}}}M_0$. 
This shows that $M_n$ converges to $0$ and thus by using taylor estimations, $f$ is entire.</p>\n\n<p>Now writing :</p>\n\n<p>\\begin{equation*}\nf(x) = \\sum_{k=0}^{\\infty} a_k x^k\n\\end{equation*}</p>\n\n<p>We get by substituting in the equation: \n\\begin{equation*}\n \\sum_{k=0}^{\\infty}(k+1) a_{k+1}x^k = \\sum_{k=0}^{\\infty} \\frac{c}{2^k} x^k\n\\end{equation*}</p>\n\n<p>Which yields $a_{k+1} = \\frac{c}{k+1}\\cdot\\frac{a_k}{2^k}$. Therefore, $a_k= \\frac{c^k}{k!} \\cdot \\frac{f(0)}{2^{\\frac{k(k-1}{2}}}$. Which gives you the function you are looking for.</p>\n" } ]
Question: Differential equation combined with functional equation which looks simple Answer 1: <p>To find the analytic solution, we can use a power series expansion around $x=0$ in the usual way: $$ 0 = f'(x) - c f(x/2) = \sum_{n=0}^{\infty} \left( f^{(n+1)}(0)-\frac{c}{2^n}f^{(n)}(0) \right) \frac{x^n}{n!}, $$ and then equating coefficients gives $$ f^{(n+1)}(0)=\frac{c}{2^n}f^{(n)}(0). $$ It is then simple to prove by induction that $$ f^{(n)}(0) = \frac{c^n}{2^{1+2+3+\dotsb+(n-1)}} f(0) = \frac{c^n}{2^{n(n-1)/2}} f(0), $$ so $$ f(x) = f(0) \sum_{n=0}^{\infty} \frac{c^n}{2^{n(n-1)/2}} \frac{x^n}{n!}. $$</p> Answer 2: <p>You can show pretty easily that $f$ is an entire function. Indeed, by taking an arbitrary $ K&gt;0$, and setting $M_n=\|f^{(n)}\|_{\infty, [-K,K]}$, by differentiating the equation n times, we get the relation $M_n \leq \frac{c}{2^{n-1}}M_{n-1}$ for all positive integers $n$. This inequality yields $M_n \leq \frac{c^n}{2^{\frac{n(n-1)}{2}}}M_0$. This shows that $M_n$ converges to $0$ and thus by using Taylor estimates, $f$ is entire.</p> <p>Now writing:</p> <p>\begin{equation*} f(x) = \sum_{k=0}^{\infty} a_k x^k \end{equation*}</p> <p>We get by substituting in the equation: \begin{equation*} \sum_{k=0}^{\infty}(k+1) a_{k+1}x^k = \sum_{k=0}^{\infty} \frac{c}{2^k} a_k x^k \end{equation*}</p> <p>This yields $a_{k+1} = \frac{c}{k+1}\cdot\frac{a_k}{2^k}$. Therefore, $a_k= \frac{c^k}{k!} \cdot \frac{f(0)}{2^{\frac{k(k-1)}{2}}}$, which gives you the function you are looking for.</p>
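The closed form derived in the answers above can be sanity-checked numerically. The sketch below is illustrative only (the value of c, the truncation depth, and the finite-difference step are arbitrary choices, not taken from the answers): it builds the truncated series $f(x) = f(0)\sum_n c^n x^n / (2^{n(n-1)/2}\, n!)$ and checks the defining identity $f'(x) = c\,f(x/2)$.

```python
from math import factorial

def f(x, c=1.0, f0=1.0, terms=40):
    # Truncated series solution: f(x) = f(0) * sum_n c^n x^n / (2^(n(n-1)/2) * n!)
    return f0 * sum(c**n * x**n / (2**(n * (n - 1) // 2) * factorial(n))
                    for n in range(terms))

def fprime(x, c=1.0, f0=1.0, terms=40, h=1e-6):
    # Central finite difference; accurate enough to test the identity below
    return (f(x + h, c, f0, terms) - f(x - h, c, f0, terms)) / (2 * h)

# The series should satisfy f'(x) = c * f(x/2)
c, x = 1.5, 0.7
assert abs(fprime(x, c) - c * f(x / 2, c)) < 1e-5
```

The same coefficients reproduce the recurrence $a_{k+1} = \frac{c}{2^k(k+1)} a_k$ from the second answer.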
Verify and Improve: $|x+\frac{1}{x}|\geq 6$
2,082,597
5
https://math.stackexchange.com/questions/2082597/verify-and-improve-x-frac1x-geq-6
<h2>Exercise:</h2> <blockquote> <p>Determine the intervals in which the following inequality is satisfied: $$|x+\frac{1}{x}| \geq 6$$</p> </blockquote> <hr> <h2>Attempt:</h2> <p><em>I apologize for not MathJax-ifying this; the following easy-to-read formatting is beyond my expertise.</em></p> <p><a href="https://i.stack.imgur.com/fDAQk.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fDAQk.jpg" alt=""></a></p> <hr> <h2>Request:</h2> <p>Can I improve upon the solution to make it more elegant and concise?</p>
[ { "answer_id": 2082633, "score": 3, "text": "<p>My answer tries to explain more precisely the symmetry involved.</p>\n\n<p>Let's consider:\n$$\nf(x) \\equiv x+\\frac{1}{x}=-f(-x)\n$$\nHence, the LHS of the inequality is an odd function of x. So, under an absolute value:\n$$\n|f(x)|=|f(-x)|\\implies |f(x)|=|f(|x|)| \\,\\,\\,\\forall x\n$$\nHence, you can replace $x\\to|x|$ and solve accordingly as other users have suggested.</p>\n" }, { "answer_id": 2082612, "score": 2, "text": "<p>Possibly more concise approach (though yours is certainly sound):</p>\n\n<ul>\n<li>For $|x+\\frac{1}{x}| \\geq 6$, we need only consider $x&gt;0$ by symmetry (exclude $x=0$ as you say)</li>\n<li>$x+\\frac{1}{x}\\ge6\\implies x^2-6x+1\\ge0\\iff(x-3)^2\\ge8\\iff|x-3|\\ge2\\sqrt{2}$</li>\n<li>So, for $x&gt;0$, our solution is $0&lt;x\\le3-2\\sqrt2\\cup x\\ge3+2\\sqrt2$</li>\n<li>For $x&lt;0$, symmetry gives $2\\sqrt2-3\\le x&lt;0\\cup x\\le-3-2\\sqrt2$\n\n<ul>\n<li>Alternatively, we can just replace every $x$ with $|x|$ to arrive directly at:</li>\n</ul></li>\n</ul>\n\n<p>$$|x+\\frac{1}{x}| \\geq 6 \\iff 0&lt;|x|\\le3-2\\sqrt2\\cup |x|\\ge3+2\\sqrt2$$</p>\n" }, { "answer_id": 2082618, "score": 2, "text": "<p>Square the inequality to get rid of the absolute value.</p>\n\n<p>$$\\left(x+\\frac1x\\right)^2=x^2+\\frac1{x^2}+2\\ge36.$$</p>\n\n<p>As $x^2&gt;0$, this can be rewritten</p>\n\n<p>$$x^4-34x^2+1\\ge0$$ and by completing the square,</p>\n\n<p>$$(x^2-17)^2\\ge17^2-1=2\\cdot12^2.$$</p>\n\n<p>Then</p>\n\n<p>$$x^2-17\\le-12\\sqrt2\\ \\lor\\ 12\\sqrt2\\le x^2-17,$$ \n$$x^2\\le17-12\\sqrt2\\ \\lor\\ 12\\sqrt2+17\\le x^2$$ \n and</p>\n\n<p>$$|x|\\le\\sqrt{17-12\\sqrt2}\\ \\lor\\ \\sqrt{12\\sqrt2+17}\\le|x|,$$</p>\n\n<p>also written, after radical denesting</p>\n\n<p>$$|x|\\le3-2\\sqrt2\\ \\lor\\ 3+2\\sqrt2\\le|x|.$$</p>\n" }, { "answer_id": 2082616, "score": 1, "text": "<p>First of all you don't need to check for negative/positive $x$ values separately. 
Because if the inequality holds for some positive (or negative) $x$, then it holds for $-x$, too. So you just need to check it for one side.</p>\n\n<p>Assuming $x&gt;0$, we have\n$$\n\\begin{align}\n6&amp;\\leq x+\\frac1x\\\\\n0&amp;\\leq x^2-6x+1\\\\\n0&amp;\\leq (x-3)^2-8\\\\\n8&amp;\\leq (x-3)^2\\\\\n2\\sqrt{2}&amp;\\leq|x-3|\\\\\n\\end{align}\n$$</p>\n\n<p>which is equivalent to\n$$2\\sqrt{2}\\leq x-3 \\quad\\mathrm{or}\\quad -2\\sqrt{2}\\geq x-3\\\\$$\nor\n$$x\\geq3+2\\sqrt{2} \\quad\\mathrm{or}\\quad 0&lt;x\\leq 3-2\\sqrt{2}\\\\$$\nDue to symmetry around zero (as I've mentioned in the beginning) the final answer is $$x\\in(-\\infty,-3-2\\sqrt{2}]\\cup[-3+2\\sqrt{2},0)\\cup(0,3-2\\sqrt{2}]\\cup[3+2\\sqrt{2},\\infty)$$</p>\n" }, { "answer_id": 2082633, "score": 3, "text": "<p>My answer tries to explain more precisely the symmetry involved.</p>\n\n<p>Let's consider:\n$$\nf(x) \\equiv x+\\frac{1}{x}=-f(-x)\n$$\nHence, the LHS of the inequality is an odd function of x. 
So, under an absolute value:\n$$\n|f(x)|=|f(-x)|\\implies |f(x)|=|f(|x|)| \\,\\,\\,\\forall x\n$$\nHence, you can replace $x\\to|x|$ and solve accordingly as other user have suggested.</p>\n" }, { "answer_id": 2082612, "score": 2, "text": "<p>Possibly more concise approach (though yours is certainly sound):</p>\n\n<ul>\n<li>For $|x+\\frac{1}{x}| \\geq 6$, we need only consider $x&gt;0$ by symmetry (exclude $x=0$ as you say)</li>\n<li>$x+\\frac{1}{x}\\ge6\\implies x^2-6x+1\\ge0\\iff(x-3)^2\\ge8\\iff|x-3|\\ge2\\sqrt{2}$</li>\n<li>So, for $x&gt;0$, our solution is $0&lt;x\\le3-2\\sqrt2\\cup x\\ge3+2\\sqrt2$</li>\n<li>For $x&lt;0$, symmetry gives $2\\sqrt2-3&lt;x&lt;0\\cup x&lt;-3-2\\sqrt2$\n\n<ul>\n<li>Alternatively, we can just replace every $x$ with $|x|$ to arrive directly at:</li>\n</ul></li>\n</ul>\n\n<p>$$|x+\\frac{1}{x}| \\geq 6 \\iff 0&lt;|x|\\le3-2\\sqrt2\\cup |x|\\ge3+2\\sqrt2$$</p>\n" }, { "answer_id": 2082618, "score": 2, "text": "<p>Square the inequality to get rid of the absolute value.</p>\n\n<p>$$\\left(x+\\frac1x\\right)^2=x^2+\\frac1{x^2}+2\\ge36.$$</p>\n\n<p>s $x^2&gt;0$, this can be rewritten</p>\n\n<p>$$x^4-34x^2+1\\ge0$$ and by completing the square,</p>\n\n<p>$$(x^2-17)^2\\ge17^2-1=2\\cdot12^2.$$</p>\n\n<p>Then</p>\n\n<p>$$x^2-17\\le-12\\sqrt2\\ \\lor\\ 12\\sqrt2\\le x^2-17,$$ \n$$x^2\\le17-12\\sqrt2\\ \\lor\\ 12\\sqrt2+17\\le x^2$$ \n and</p>\n\n<p>$$|x|\\le\\sqrt{17-12\\sqrt2}\\ \\lor\\ \\sqrt{12\\sqrt2+17}\\le|x|,$$</p>\n\n<p>also written, after radical denesting</p>\n\n<p>$$|x|\\le3-2\\sqrt2\\ \\lor\\ 3+2\\sqrt2\\le|x|.$$</p>\n" }, { "answer_id": 2082616, "score": 1, "text": "<p>First of all you don't need to check for negative/positive $x$ values separately. Because if the inequality holds for some positive (or negative) $x$, then it holds for $-x$, too. 
So you just need to check it for one side.</p>\n\n<p>Assuming $x&gt;0$, we have\n$$\n\\begin{align}\n6&amp;\\leq x+\\frac1x\\\\\n0&amp;\\leq x^2-6x+1\\\\\n0&amp;\\leq (x-3)^2-8\\\\\n8&amp;\\leq (x-3)^2\\\\\n2\\sqrt{2}&amp;\\leq|x-3|\\\\\n\\end{align}\n$$</p>\n\n<p>which is equivalent to\n$$2\\sqrt{2}\\leq x-3 \\quad\\mathrm{or}\\quad -2\\sqrt{2}\\geq x-3\\\\$$\nor\n$$x\\geq3+2\\sqrt{2} \\quad\\mathrm{or}\\quad 0&lt;x\\leq 3-2\\sqrt{2}\\\\$$\nDue to symmetry around zero (as I've mentioned in the beginning) the final answer is $$x\\in(-\\infty,-3-2\\sqrt{2}]\\cup[-3+2\\sqrt{2},0)\\cup(0,3-2\\sqrt{2}]\\cup[3+2\\sqrt{2},\\infty)$$</p>\n" }, { "answer_id": 2082633, "score": 3, "text": "<p>My answer tries to explain more precisely the symmetry involved.</p>\n\n<p>Let's consider:\n$$\nf(x) \\equiv x+\\frac{1}{x}=-f(-x)\n$$\nHence, the LHS of the inequality is an odd function of x. So, under an absolute value:\n$$\n|f(x)|=|f(-x)|\\implies |f(x)|=|f(|x|)| \\,\\,\\,\\forall x\n$$\nHence, you can replace $x\\to|x|$ and solve accordingly as other user have suggested.</p>\n" }, { "answer_id": 2082612, "score": 2, "text": "<p>Possibly more concise approach (though yours is certainly sound):</p>\n\n<ul>\n<li>For $|x+\\frac{1}{x}| \\geq 6$, we need only consider $x&gt;0$ by symmetry (exclude $x=0$ as you say)</li>\n<li>$x+\\frac{1}{x}\\ge6\\implies x^2-6x+1\\ge0\\iff(x-3)^2\\ge8\\iff|x-3|\\ge2\\sqrt{2}$</li>\n<li>So, for $x&gt;0$, our solution is $0&lt;x\\le3-2\\sqrt2\\cup x\\ge3+2\\sqrt2$</li>\n<li>For $x&lt;0$, symmetry gives $2\\sqrt2-3&lt;x&lt;0\\cup x&lt;-3-2\\sqrt2$\n\n<ul>\n<li>Alternatively, we can just replace every $x$ with $|x|$ to arrive directly at:</li>\n</ul></li>\n</ul>\n\n<p>$$|x+\\frac{1}{x}| \\geq 6 \\iff 0&lt;|x|\\le3-2\\sqrt2\\cup |x|\\ge3+2\\sqrt2$$</p>\n" }, { "answer_id": 2082618, "score": 2, "text": "<p>Square the inequality to get rid of the absolute value.</p>\n\n<p>$$\\left(x+\\frac1x\\right)^2=x^2+\\frac1{x^2}+2\\ge36.$$</p>\n\n<p>s $x^2&gt;0$, this 
can be rewritten</p>\n\n<p>$$x^4-34x^2+1\\ge0$$ and by completing the square,</p>\n\n<p>$$(x^2-17)^2\\ge17^2-1=2\\cdot12^2.$$</p>\n\n<p>Then</p>\n\n<p>$$x^2-17\\le-12\\sqrt2\\ \\lor\\ 12\\sqrt2\\le x^2-17,$$ \n$$x^2\\le17-12\\sqrt2\\ \\lor\\ 12\\sqrt2+17\\le x^2$$ \n and</p>\n\n<p>$$|x|\\le\\sqrt{17-12\\sqrt2}\\ \\lor\\ \\sqrt{12\\sqrt2+17}\\le|x|,$$</p>\n\n<p>also written, after radical denesting</p>\n\n<p>$$|x|\\le3-2\\sqrt2\\ \\lor\\ 3+2\\sqrt2\\le|x|.$$</p>\n" }, { "answer_id": 2082616, "score": 1, "text": "<p>First of all you don't need to check for negative/positive $x$ values separately. Because if the inequality holds for some positive (or negative) $x$, then it holds for $-x$, too. So you just need to check it for one side.</p>\n\n<p>Assuming $x&gt;0$, we have\n$$\n\\begin{align}\n6&amp;\\leq x+\\frac1x\\\\\n0&amp;\\leq x^2-6x+1\\\\\n0&amp;\\leq (x-3)^2-8\\\\\n8&amp;\\leq (x-3)^2\\\\\n2\\sqrt{2}&amp;\\leq|x-3|\\\\\n\\end{align}\n$$</p>\n\n<p>which is equivalent to\n$$2\\sqrt{2}\\leq x-3 \\quad\\mathrm{or}\\quad -2\\sqrt{2}\\geq x-3\\\\$$\nor\n$$x\\geq3+2\\sqrt{2} \\quad\\mathrm{or}\\quad 0&lt;x\\leq 3-2\\sqrt{2}\\\\$$\nDue to symmetry around zero (as I've mentioned in the beginning) the final answer is $$x\\in(-\\infty,-3-2\\sqrt{2}]\\cup[-3+2\\sqrt{2},0)\\cup(0,3-2\\sqrt{2}]\\cup[3+2\\sqrt{2},\\infty)$$</p>\n" }, { "answer_id": 2082633, "score": 3, "text": "<p>My answer tries to explain more precisely the symmetry involved.</p>\n\n<p>Let's consider:\n$$\nf(x) \\equiv x+\\frac{1}{x}=-f(-x)\n$$\nHence, the LHS of the inequality is an odd function of x. 
So, under an absolute value:\n$$\n|f(x)|=|f(-x)|\\implies |f(x)|=|f(|x|)| \\,\\,\\,\\forall x\n$$\nHence, you can replace $x\\to|x|$ and solve accordingly as other user have suggested.</p>\n" }, { "answer_id": 2082612, "score": 2, "text": "<p>Possibly more concise approach (though yours is certainly sound):</p>\n\n<ul>\n<li>For $|x+\\frac{1}{x}| \\geq 6$, we need only consider $x&gt;0$ by symmetry (exclude $x=0$ as you say)</li>\n<li>$x+\\frac{1}{x}\\ge6\\implies x^2-6x+1\\ge0\\iff(x-3)^2\\ge8\\iff|x-3|\\ge2\\sqrt{2}$</li>\n<li>So, for $x&gt;0$, our solution is $0&lt;x\\le3-2\\sqrt2\\cup x\\ge3+2\\sqrt2$</li>\n<li>For $x&lt;0$, symmetry gives $2\\sqrt2-3&lt;x&lt;0\\cup x&lt;-3-2\\sqrt2$\n\n<ul>\n<li>Alternatively, we can just replace every $x$ with $|x|$ to arrive directly at:</li>\n</ul></li>\n</ul>\n\n<p>$$|x+\\frac{1}{x}| \\geq 6 \\iff 0&lt;|x|\\le3-2\\sqrt2\\cup |x|\\ge3+2\\sqrt2$$</p>\n" }, { "answer_id": 2082618, "score": 2, "text": "<p>Square the inequality to get rid of the absolute value.</p>\n\n<p>$$\\left(x+\\frac1x\\right)^2=x^2+\\frac1{x^2}+2\\ge36.$$</p>\n\n<p>s $x^2&gt;0$, this can be rewritten</p>\n\n<p>$$x^4-34x^2+1\\ge0$$ and by completing the square,</p>\n\n<p>$$(x^2-17)^2\\ge17^2-1=2\\cdot12^2.$$</p>\n\n<p>Then</p>\n\n<p>$$x^2-17\\le-12\\sqrt2\\ \\lor\\ 12\\sqrt2\\le x^2-17,$$ \n$$x^2\\le17-12\\sqrt2\\ \\lor\\ 12\\sqrt2+17\\le x^2$$ \n and</p>\n\n<p>$$|x|\\le\\sqrt{17-12\\sqrt2}\\ \\lor\\ \\sqrt{12\\sqrt2+17}\\le|x|,$$</p>\n\n<p>also written, after radical denesting</p>\n\n<p>$$|x|\\le3-2\\sqrt2\\ \\lor\\ 3+2\\sqrt2\\le|x|.$$</p>\n" }, { "answer_id": 2082616, "score": 1, "text": "<p>First of all you don't need to check for negative/positive $x$ values separately. Because if the inequality holds for some positive (or negative) $x$, then it holds for $-x$, too. 
So you just need to check it for one side.</p>\n\n<p>Assuming $x&gt;0$, we have\n$$\n\\begin{align}\n6&amp;\\leq x+\\frac1x\\\\\n0&amp;\\leq x^2-6x+1\\\\\n0&amp;\\leq (x-3)^2-8\\\\\n8&amp;\\leq (x-3)^2\\\\\n2\\sqrt{2}&amp;\\leq|x-3|\\\\\n\\end{align}\n$$</p>\n\n<p>which is equivalent to\n$$2\\sqrt{2}\\leq x-3 \\quad\\mathrm{or}\\quad -2\\sqrt{2}\\geq x-3\\\\$$\nor\n$$x\\geq3+2\\sqrt{2} \\quad\\mathrm{or}\\quad 0&lt;x\\leq 3-2\\sqrt{2}\\\\$$\nDue to symmetry around zero (as I've mentioned in the beginning) the final answer is $$x\\in(-\\infty,-3-2\\sqrt{2}]\\cup[-3+2\\sqrt{2},0)\\cup(0,3-2\\sqrt{2}]\\cup[3+2\\sqrt{2},\\infty)$$</p>\n" } ]
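As a sanity check on the closed-form solution set derived in the answers above (my addition — the original thread contains no code), the following Python sketch compares the inequality $|x+\frac{1}{x}| \geq 6$ directly against the interval description $0&lt;|x|\le3-2\sqrt2$ or $|x|\ge3+2\sqrt2$ over a grid of sample points:

```python
import math

LOW = 3 - 2 * math.sqrt(2)   # ~= 0.1716
HIGH = 3 + 2 * math.sqrt(2)  # ~= 5.8284

def in_solution_set(x):
    """Closed-form answer from the thread: 0 < |x| <= 3 - 2*sqrt(2)  or  |x| >= 3 + 2*sqrt(2)."""
    a = abs(x)
    return a != 0 and (a <= LOW or a >= HIGH)

def satisfies_inequality(x):
    """The original inequality |x + 1/x| >= 6 (undefined, hence excluded, at x = 0)."""
    return x != 0 and abs(x + 1 / x) >= 6

# Compare on a grid of sample points straddling every boundary.
samples = [k / 1000 for k in range(-8000, 8001)]
assert all(in_solution_set(x) == satisfies_inequality(x) for x in samples)
```

No grid point lands exactly on the irrational boundaries $3\pm2\sqrt2$, so floating-point rounding does not affect the comparison.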
Question: Verify and Improve: $|x+\frac{1}{x}|\geq 6$ Answer 1: <p>My answer tries to explain more precisely the symmetry involved.</p> <p>Let's consider: $$ f(x) \equiv x+\frac{1}{x}=-f(-x) $$ Hence, the LHS of the inequality is an odd function of $x$. So, under an absolute value: $$ |f(x)|=|f(-x)|\implies |f(x)|=|f(|x|)| \,\,\,\forall x $$ Hence, you can replace $x\to|x|$ and solve accordingly as other users have suggested.</p> Answer 2: <p>Possibly more concise approach (though yours is certainly sound):</p> <ul> <li>For $|x+\frac{1}{x}| \geq 6$, we need only consider $x&gt;0$ by symmetry (exclude $x=0$ as you say)</li> <li>$x+\frac{1}{x}\ge6\implies x^2-6x+1\ge0\iff(x-3)^2\ge8\iff|x-3|\ge2\sqrt{2}$</li> <li>So, for $x&gt;0$, our solution is $0&lt;x\le3-2\sqrt2\cup x\ge3+2\sqrt2$</li> <li>For $x&lt;0$, symmetry gives $2\sqrt2-3&lt;x&lt;0\cup x&lt;-3-2\sqrt2$ <ul> <li>Alternatively, we can just replace every $x$ with $|x|$ to arrive directly at:</li> </ul></li> </ul> <p>$$|x+\frac{1}{x}| \geq 6 \iff 0&lt;|x|\le3-2\sqrt2\cup |x|\ge3+2\sqrt2$$</p> Answer 3: <p>Square the inequality to get rid of the absolute value.</p> <p>$$\left(x+\frac1x\right)^2=x^2+\frac1{x^2}+2\ge36.$$</p> <p>As $x^2&gt;0$, this can be rewritten</p> <p>$$x^4-34x^2+1\ge0$$ and by completing the square,</p> <p>$$(x^2-17)^2\ge17^2-1=2\cdot12^2.$$</p> <p>Then</p> <p>$$x^2-17\le-12\sqrt2\ \lor\ 12\sqrt2\le x^2-17,$$ $$x^2\le17-12\sqrt2\ \lor\ 12\sqrt2+17\le x^2$$ and</p> <p>$$|x|\le\sqrt{17-12\sqrt2}\ \lor\ \sqrt{12\sqrt2+17}\le|x|,$$</p> <p>also written, after radical denesting</p> <p>$$|x|\le3-2\sqrt2\ \lor\ 3+2\sqrt2\le|x|.$$</p> Answer 4: <p>First of all you don't need to check for negative/positive $x$ values separately. Because if the inequality holds for some positive (or negative) $x$, then it holds for $-x$, too. 
So you just need to check it for one side.</p> <p>Assuming $x&gt;0$, we have $$ \begin{align} 6&amp;\leq x+\frac1x\\ 0&amp;\leq x^2-6x+1\\ 0&amp;\leq (x-3)^2-8\\ 8&amp;\leq (x-3)^2\\ 2\sqrt{2}&amp;\leq|x-3|\\ \end{align} $$</p> <p>which is equivalent to $$2\sqrt{2}\leq x-3 \quad\mathrm{or}\quad -2\sqrt{2}\geq x-3\\$$ or $$x\geq3+2\sqrt{2} \quad\mathrm{or}\quad 0&lt;x\leq 3-2\sqrt{2}\\$$ Due to symmetry around zero (as I've mentioned in the beginning) the final answer is $$x\in(-\infty,-3-2\sqrt{2}]\cup[-3+2\sqrt{2},0)\cup(0,3-2\sqrt{2}]\cup[3+2\sqrt{2},\infty)$$</p>
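The "square the inequality" answer above hinges on the radical denesting $\sqrt{17-12\sqrt2}=3-2\sqrt2$ and $\sqrt{17+12\sqrt2}=3+2\sqrt2$. A short numerical check (my addition, not part of the thread) confirms both identities, and also that the two thresholds are reciprocals — consistent with $x+\frac1x$ being unchanged when $x$ is replaced by $\frac1x$:

```python
import math

s2 = math.sqrt(2)

# Denesting used in the final step: sqrt(17 -/+ 12*sqrt(2)) = 3 -/+ 2*sqrt(2),
# since (3 - 2*sqrt(2))**2 = 9 + 8 - 12*sqrt(2) = 17 - 12*sqrt(2).
assert math.isclose(math.sqrt(17 - 12 * s2), 3 - 2 * s2, rel_tol=1e-9)
assert math.isclose(math.sqrt(17 + 12 * s2), 3 + 2 * s2, rel_tol=1e-9)

# The two thresholds multiply to 9 - 8 = 1, i.e. they are reciprocals.
assert math.isclose((3 - 2 * s2) * (3 + 2 * s2), 1.0, rel_tol=1e-9)
```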
Ratio of Legs in 15, 75, 90 triangles
2,082,660
5
https://math.stackexchange.com/questions/2082660/ratio-of-legs-in-15-75-90-triangles
<p>$\text{What is the ratio of legs in a right triangle with angles of 15, 75, and 90?}$ I know the ratio of legs in a $30, 60, 90$ triangle, which is the lengths $1$, $\sqrt{3}$, and $2$ respectively. This is what I have got so far: <a href="https://i.stack.imgur.com/Y82yy.gif" rel="noreferrer">Using the 30-60-90 Ratio</a></p> <p>How would I be able to take this a step further and be able to find the answer? Thanks in advance.</p>
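As a numeric companion to the answers that follow (my addition; the thread itself contains no code), one can confirm in Python that $\tan 15^\circ$ matches both the closed form $2-\sqrt3$ and the ratio $\frac{1}{2+\sqrt3}$ read off the 30-60-90 construction, and that the full side ratio is $2-\sqrt{3}:1:\sqrt{6}-\sqrt{2}$:

```python
import math

# Short leg : long leg in a 15-75-90 right triangle.
ratio = math.tan(math.radians(15))

# Closed form 2 - sqrt(3); equivalently 1 / (2 + sqrt(3)) from the construction.
assert math.isclose(ratio, 2 - math.sqrt(3), rel_tol=1e-9)
assert math.isclose(ratio, 1 / (2 + math.sqrt(3)), rel_tol=1e-9)

# Hypotenuse via the Pythagorean theorem: with legs (2 - sqrt(3)) and 1,
# the hypotenuse equals sqrt(6) - sqrt(2).
hyp = math.hypot(2 - math.sqrt(3), 1)
assert math.isclose(hyp, math.sqrt(6) - math.sqrt(2), rel_tol=1e-9)
```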
[ { "answer_id": 2082666, "score": 3, "text": "<p>The ratio of legs is\n$$\nr = \\tan 15^\\circ.\n$$\n(This is quite easily derived from the definition of the $\\tan$ function.)</p>\n\n<p>You can also represent the ratio using radicals:\n$$\nr = 2 - \\sqrt{3} \\approx 0.267949\n$$</p>\n\n<p>If we do not want to use $\\tan$ at all, then we obtain the same answer just reasoning from your picture:\n$$\nr = {1\\over2+\\sqrt{3}}= 2 - \\sqrt{3}.\n$$</p>\n\n<p>(In this ratio, the numerator $1$ is the vertical leg in your picture; and the denominator $2+\\sqrt{3}$ is the horizontal leg.)</p>\n" }, { "answer_id": 2082699, "score": 1, "text": "<p>Let $a$, $b$, $c$ represent the line segments that make up a right triangle, and let $A$, $B$, $C$ represent the angles opposite those line segments. Let $C=90^\\circ$.</p>\n\n<p>The legs ($a$ and $b$) are given by the following equations.\n$$a=c\\sin{A}$$\n$$b=c\\cos{A}$$</p>\n\n<p>We want to find the ratio $a\\over b$.</p>\n\n<p>$${a\\over b}={c\\sin{A}\\over c\\cos{A}}={\\sin{A}\\over \\cos{A}}=\\tan(A)$$</p>\n\n<p>Now substitute the value of $A$ in your example. You can either use $15^\\circ$ or $75^\\circ$ (using one instead of the other will give the inverse ratio).</p>\n\n<p>$${a\\over b}=\\tan(15^\\circ)\\quad\\text{or}\\quad{b\\over a}=\\tan(75^\\circ)$$</p>\n" }, { "answer_id": 4714648, "score": 1, "text": "<p><span class=\"math-container\">$\\triangle ABD$</span> has a right angle at <span class=\"math-container\">$D$</span> and acute angles of <span class=\"math-container\">$30°(B)$</span> and <span class=\"math-container\">$60°(A)$</span>. <span class=\"math-container\">$\\triangle ABC$</span> is isosceles with base <span class=\"math-container\">$\\overline{AC}$</span>. Thus with the dimensions of the <span class=\"math-container\">$30°-60°-90°$</span> right triangle, <span class=\"math-container\">$BC=AB=2AD$</span> and <span class=\"math-container\">$BD=\\sqrt3AD$</span>. 
Also <span class=\"math-container\">$\\angle ACD$</span> in the isosceles triangle measures half the exterior <span class=\"math-container\">$\\angle ABD$</span>, thus <span class=\"math-container\">$15°$</span>. Thereby <span class=\"math-container\">$\\triangle ACD$</span> is a right triangle with acute angles measuring <span class=\"math-container\">$15°$</span> and <span class=\"math-container\">$75°$</span>, with <span class=\"math-container\">$\\color{blue}{CD=BC+BD=(2+\\sqrt3)AD}$</span>.</p>\n<p><a href=\"https://i.stack.imgur.com/AmqPb.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/AmqPb.jpg\" alt=\"enter image description here\" /></a></p>\n" }, { "answer_id": 4714665, "score": 1, "text": "<p><a href=\"https://i.stack.imgur.com/Es9fB.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Es9fB.png\" alt=\"enter image description here\" /></a></p>\n<p>...........................................................................</p>\n" }, { "answer_id": 4303747, "score": -1, "text": "<p>See the link here, it provides an explanation of the ratios and has other resources on triangle ratios:</p>\n<p><a href=\"https://robertlovespi.net/2013/10/18/the-15-75-90-triangle/\" rel=\"nofollow noreferrer\">https://robertlovespi.net/2013/10/18/the-15-75-90-triangle/</a></p>\n<p>Brief overview:</p>\n<p>15-75-90 triangle is based on the regular dodecagon, using this <a href=\"https://i.stack.imgur.com/5iWeY.png\" rel=\"nofollow noreferrer\">diagram:</a></p>\n<p>Using the Pythagoran Theorem and other methods, we get the short leg:long leg:hypotenuse ratio in a 15-75-90 triangle as <span class=\"math-container\">$2-\\sqrt{3}:1:2\\sqrt{2 - \\sqrt{3}}.$</span> The nested radical ultimately simplifies to <span class=\"math-container\">$\\sqrt{6} - \\sqrt{2},$</span> so the simplified ratio is <span class=\"math-container\">$2 - \\sqrt{3}:1:\\sqrt{6} - \\sqrt{2}.$</span></p>\n" } ]
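The leg-ratio claims in the answers above can be checked numerically. The sketch below is an editorial addition (not part of the original thread); it verifies that tan 15° equals 2 − √3 and that the hypotenuse form 2√(2 − √3) simplifies to √6 − √2:

```python
import math

# Short leg : long leg ratio claimed in the answers: tan(15°) = 2 - sqrt(3).
ratio = math.tan(math.radians(15))
closed_form = 2 - math.sqrt(3)
assert math.isclose(ratio, closed_form)

# Hypotenuse for legs (2 - sqrt(3)) and 1, by the Pythagorean theorem,
# compared against the two radical forms quoted for it.
hyp = math.hypot(closed_form, 1.0)
nested = 2 * math.sqrt(2 - math.sqrt(3))
simplified = math.sqrt(6) - math.sqrt(2)
assert math.isclose(hyp, nested)
assert math.isclose(nested, simplified)

print(round(closed_form, 6))  # 0.267949, the decimal quoted in the first answer
```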
Question: Ratio of Legs in 15, 75, 90 triangles Answer 1: <p>The ratio of legs is $$ r = \tan 15^\circ. $$ (This is quite easily derived from the definition of the $\tan$ function.)</p> <p>You can also represent the ratio using radicals: $$ r = 2 - \sqrt{3} \approx 0.267949 $$</p> <p>If we do not want to use $\tan$ at all, then we obtain the same answer just reasoning from your picture: $$ r = {1\over2+\sqrt{3}}= 2 - \sqrt{3}. $$</p> <p>(In this ratio, the numerator $1$ is the vertical leg in your picture; and the denominator $2+\sqrt{3}$ is the horizontal leg.)</p> Answer 2: <p>Let $a$, $b$, $c$ represent the line segments that make up a right triangle, and let $A$, $B$, $C$ represent the angles opposite those line segments. Let $C=90^\circ$.</p> <p>The legs ($a$ and $b$) are given by the following equations. $$a=c\sin{A}$$ $$b=c\cos{A}$$</p> <p>We want to find the ratio $a\over b$.</p> <p>$${a\over b}={c\sin{A}\over c\cos{A}}={\sin{A}\over \cos{A}}=\tan(A)$$</p> <p>Now substitute the value of $A$ in your example. You can either use $15^\circ$ or $75^\circ$ (using one instead of the other will give the inverse ratio).</p> <p>$${a\over b}=\tan(15^\circ)\quad\text{or}\quad{b\over a}=\tan(75^\circ)$$</p> Answer 3: <p><span class="math-container">$\triangle ABD$</span> has a right angle at <span class="math-container">$D$</span> and acute angles of <span class="math-container">$30°(B)$</span> and <span class="math-container">$60°(A)$</span>. <span class="math-container">$\triangle ABC$</span> is isosceles with base <span class="math-container">$\overline{AC}$</span>. Thus with the dimensions of the <span class="math-container">$30°-60°-90°$</span> right triangle, <span class="math-container">$BC=AB=2AD$</span> and <span class="math-container">$BD=\sqrt3AD$</span>. Also <span class="math-container">$\angle ACD$</span> in the isosceles triangle measures half the exterior <span class="math-container">$\angle ABD$</span>, thus <span class="math-container">$15°$</span>. 
Thereby <span class="math-container">$\triangle ACD$</span> is a right triangle with acute angles measuring <span class="math-container">$15°$</span> and <span class="math-container">$75°$</span>, with <span class="math-container">$\color{blue}{CD=BC+BD=(2+\sqrt3)AD}$</span>.</p> <p><a href="https://i.stack.imgur.com/AmqPb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AmqPb.jpg" alt="enter image description here" /></a></p> Answer 4: <p><a href="https://i.stack.imgur.com/Es9fB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Es9fB.png" alt="enter image description here" /></a></p> <p>...........................................................................</p> Answer 5: <p>See the link here, it provides an explanation of the ratios and has other resources on triangle ratios:</p> <p><a href="https://robertlovespi.net/2013/10/18/the-15-75-90-triangle/" rel="nofollow noreferrer">https://robertlovespi.net/2013/10/18/the-15-75-90-triangle/</a></p> <p>Brief overview:</p> <p>15-75-90 triangle is based on the regular dodecagon, using this <a href="https://i.stack.imgur.com/5iWeY.png" rel="nofollow noreferrer">diagram:</a></p> <p>Using the Pythagorean Theorem and other methods, we get the short leg:long leg:hypotenuse ratio in a 15-75-90 triangle as <span class="math-container">$2-\sqrt{3}:1:2\sqrt{2 - \sqrt{3}}.$</span> The nested radical ultimately simplifies to <span class="math-container">$\sqrt{6} - \sqrt{2},$</span> so the simplified ratio is <span class="math-container">$2 - \sqrt{3}:1:\sqrt{6} - \sqrt{2}.$</span></p>
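The closed forms quoted in the answers above can be sanity-checked numerically. The following short sketch is an added illustration, not part of the original thread; it verifies that the leg ratio equals $\tan 15^\circ = 2-\sqrt{3}$ and that the nested radical $2\sqrt{2-\sqrt{3}}$ for the hypotenuse term simplifies to $\sqrt{6}-\sqrt{2}$.

```python
import math

# Leg ratio of a 15-75-90 right triangle: short leg / long leg = tan(15 degrees)
r = math.tan(math.radians(15))
closed_form = 2 - math.sqrt(3)       # radical form claimed in the answers
assert math.isclose(r, closed_form)  # both are approximately 0.267949

# Hypotenuse / long leg = sec(15 degrees); the nested radical 2*sqrt(2 - sqrt(3))
# from the dodecagon derivation should equal the simplified sqrt(6) - sqrt(2)
hyp = 1 / math.cos(math.radians(15))
nested = 2 * math.sqrt(2 - math.sqrt(3))
simplified = math.sqrt(6) - math.sqrt(2)
assert math.isclose(hyp, nested) and math.isclose(nested, simplified)
```

All three expressions for the hypotenuse term agree to floating-point precision, confirming the simplification stated in the last answer.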
What is the most efficient way to find the most separated pair in a finite sequence such that the first is greater than the second?
2,082,784
5
https://math.stackexchange.com/questions/2082784/what-is-the-most-efficient-way-to-find-the-most-separated-pair-in-a-finite-seque
<p>Given some finite sequence $a_n$ for $n \in \{1..N\}$, is there a more efficient method to solve (P1) than the $O(N^2)$ brute-force method?</p> <p>(P1) maximise $n_2-n_1$ such that $a_{n_1}&gt;a_{n_2}$</p>
[ { "answer_id": 2084150, "score": 4, "text": "<p>This is $O(N)$.</p>\n\n<p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all elements after it, call it a <em>trough</em>.</p>\n\n<p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); and similarly, $a_j$ must be a trough.</p>\n\n<p>So here is the algorithm:</p>\n\n<ol>\n<li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li>\n<li>Scan backwards through the sequence, marking all troughs ($N$ steps).</li>\n<li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li>\n<li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li>\n<li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li>\n<li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li>\n</ol>\n\n<p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p>\n\n<p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p>\n" }, { "answer_id": 2083890, "score": 2, "text": "<p>I believe you can do this in $O(N \\log N)$, but there may be a faster way. Here is a high level description of the algorithm:</p>\n\n<ol>\n<li>Store the elements as tuples of $(value, index)$ where $value$ is the value of the element and index is its position in the original array. So for ex. if your input was <code>[3, 1, 4, 2]</code>, you would store it as <code>[(3,0), (1,1), (4,2), (2,3)]</code>. 
This will be $O(N)$</li>\n<li>Sort this new array by the first element (value field) in the tuple in decreasing order. Thus previous example would now look like: <code>[(4, 2), (3, 0), (2, 3), (1, 1)]</code>. This can be done in $O(N\\log N)$</li>\n<li>Find the maximum difference between second elements (index fields) of the tuples such that the larger one comes after the smaller one in the new array. This can be done in $O(N)$.</li>\n</ol>\n\n<p>Total complexity = $O(N \\log N)$. </p>\n\n<p>I think steps $1$ and $2$ are pretty self explanatory. If you are curious about how step $3$ can be done in $O(N)$, here is some code:</p>\n\n<pre><code>def max_diff_indexes(arr):\n \"\"\"\n @params:\n\n arr - This is just an array of the SECOND parts of the tuple in \n the algorithm described above. That is, arr contains only ints \n representing the second values of the tuple (index field) after \n the sorting in step 2. \n\n @return:\n\n max_pair - Returns a pair of indexes n_1 and n_2 that maximise \n n_2 - n_1 such that a_{n_1} &gt; a_{n_2}. 
The first property is \n ensured by this algorithm while the second property in ensured\n by the fact that the array was initally sorted in decresing order\n of value\n\n \"\"\"\n if not arr: return None\n min_val = arr[0]\n max_diff = 0\n for elem in arr:\n max_diff = max(elem - min_val, max_diff)\n min_val = min(elem, min_val)\n\n max_val = max_diff - min_val\n return min_val, max_val \n</code></pre>\n" }, { "answer_id": 2083752, "score": 0, "text": "<p>Alright, I took a crack at it myself and came up with the following approach which seems to be $O\\big( (N/M)^2 + N \\log(N) \\big)$, where $M$ is the optimal value of (P1).</p>\n\n<p>Loop the following over $k$ from $1,2,\\dots$:</p>\n\n<ul>\n<li>Partition the sequence into $2^k$ contiguous sequences of size at most $\\lceil N 2^{-k} \\rceil $.</li>\n<li>Find each group's local minima and maxima, saving the indices as well.</li>\n<li>Perform this brute-force search among the local maxima / minima to find the best $n_1$ and $n_2$ belonging to different groups</li>\n<li>if $n_2-n_1 &gt; N 2^{-k}$, break out of this loop and return this pair, since no better pair may be found from within the same group from further subdivision</li>\n</ul>\n\n<p>Each $k$th iteration requires:\n$O\\big( N+ 4^k\\big)$ operations, so summing over all necessary $k$ gives the total complexity $O\\big((N/M)^2 + N \\log(N) \\big)$, where the former term dominates for small $M$ and the latter terms dominates for $M \\gtrsim \\sqrt{N}$.</p>\n" }, { "answer_id": 2084150, "score": 4, "text": "<p>This is $O(N)$.</p>\n\n<p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all elements after it, call it a <em>trough</em>.</p>\n\n<p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); and similarly, $a_j$ must be a trough.</p>\n\n<p>So here is the 
algorithm:</p>\n\n<ol>\n<li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li>\n<li>Scan backwards through the sequence, marking all troughs ($N$ steps).</li>\n<li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li>\n<li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li>\n<li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li>\n<li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li>\n</ol>\n\n<p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p>\n\n<p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p>\n" }, { "answer_id": 2083890, "score": 2, "text": "<p>I believe you can do this in $O(N \\log N)$, but there may be a faster way. Here is a high level description of the algorithm:</p>\n\n<ol>\n<li>Store the elements as tuples of $(value, index)$ where $value$ is the value of the element and index is its position in the original array. So for ex. if your input was <code>[3, 1, 4, 2]</code>, you would store it as <code>[(3,0), (1,1), (4,2), (2,3)]</code>. This will be $O(N)$</li>\n<li>Sort this new array by the first element (value field) in the tuple in decreasing order. Thus previous example would now look like: <code>[(4, 2), (3, 0), (2, 3), (1, 1)]</code>. This can be done in $O(N\\log N)$</li>\n<li>Find the maximum difference between second elements (index fields) of the tuples such that the larger one comes after the smaller one in the new array. This can be done in $O(N)$.</li>\n</ol>\n\n<p>Total complexity = $O(N \\log N)$. </p>\n\n<p>I think steps $1$ and $2$ are pretty self explanatory. 
If you are curious about how step $3$ can be done in $O(N)$, here is some code:</p>\n\n<pre><code>def max_diff_indexes(arr):\n \"\"\"\n @params:\n\n arr - This is just an array of the SECOND parts of the tuple in \n the algorithm described above. That is, arr contains only ints \n representing the second values of the tuple (index field) after \n the sorting in step 2. \n\n @return:\n\n max_pair - Returns a pair of indexes n_1 and n_2 that maximise \n n_2 - n_1 such that a_{n_1} &gt; a_{n_2}. The first property is \n ensured by this algorithm while the second property in ensured\n by the fact that the array was initally sorted in decresing order\n of value\n\n \"\"\"\n if not arr: return None\n min_val = arr[0]\n max_diff = 0\n for elem in arr:\n max_diff = max(elem - min_val, max_diff)\n min_val = min(elem, min_val)\n\n max_val = max_diff - min_val\n return min_val, max_val \n</code></pre>\n" }, { "answer_id": 2083752, "score": 0, "text": "<p>Alright, I took a crack at it myself and came up with the following approach which seems to be $O\\big( (N/M)^2 + N \\log(N) \\big)$, where $M$ is the optimal value of (P1).</p>\n\n<p>Loop the following over $k$ from $1,2,\\dots$:</p>\n\n<ul>\n<li>Partition the sequence into $2^k$ contiguous sequences of size at most $\\lceil N 2^{-k} \\rceil $.</li>\n<li>Find each group's local minima and maxima, saving the indices as well.</li>\n<li>Perform this brute-force search among the local maxima / minima to find the best $n_1$ and $n_2$ belonging to different groups</li>\n<li>if $n_2-n_1 &gt; N 2^{-k}$, break out of this loop and return this pair, since no better pair may be found from within the same group from further subdivision</li>\n</ul>\n\n<p>Each $k$th iteration requires:\n$O\\big( N+ 4^k\\big)$ operations, so summing over all necessary $k$ gives the total complexity $O\\big((N/M)^2 + N \\log(N) \\big)$, where the former term dominates for small $M$ and the latter terms dominates for $M \\gtrsim \\sqrt{N}$.</p>\n" }, { 
"answer_id": 2084150, "score": 4, "text": "<p>This is $O(N)$.</p>\n\n<p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all elements after it, call it a <em>trough</em>.</p>\n\n<p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); and similarly, $a_j$ must be a trough.</p>\n\n<p>So here is the algorithm:</p>\n\n<ol>\n<li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li>\n<li>Scan backwards through the sequence, marking all troughs ($N$ steps).</li>\n<li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li>\n<li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li>\n<li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li>\n<li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li>\n</ol>\n\n<p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p>\n\n<p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p>\n" }, { "answer_id": 2083890, "score": 2, "text": "<p>I believe you can do this in $O(N \\log N)$, but there may be a faster way. Here is a high level description of the algorithm:</p>\n\n<ol>\n<li>Store the elements as tuples of $(value, index)$ where $value$ is the value of the element and index is its position in the original array. So for ex. if your input was <code>[3, 1, 4, 2]</code>, you would store it as <code>[(3,0), (1,1), (4,2), (2,3)]</code>. 
This will be $O(N)$</li>\n<li>Sort this new array by the first element (value field) in the tuple in decreasing order. Thus previous example would now look like: <code>[(4, 2), (3, 0), (2, 3), (1, 1)]</code>. This can be done in $O(N\\log N)$</li>\n<li>Find the maximum difference between second elements (index fields) of the tuples such that the larger one comes after the smaller one in the new array. This can be done in $O(N)$.</li>\n</ol>\n\n<p>Total complexity = $O(N \\log N)$. </p>\n\n<p>I think steps $1$ and $2$ are pretty self explanatory. If you are curious about how step $3$ can be done in $O(N)$, here is some code:</p>\n\n<pre><code>def max_diff_indexes(arr):\n \"\"\"\n @params:\n\n arr - This is just an array of the SECOND parts of the tuple in \n the algorithm described above. That is, arr contains only ints \n representing the second values of the tuple (index field) after \n the sorting in step 2. \n\n @return:\n\n max_pair - Returns a pair of indexes n_1 and n_2 that maximise \n n_2 - n_1 such that a_{n_1} &gt; a_{n_2}. 
The first property is \n ensured by this algorithm while the second property in ensured\n by the fact that the array was initally sorted in decresing order\n of value\n\n \"\"\"\n if not arr: return None\n min_val = arr[0]\n max_diff = 0\n for elem in arr:\n max_diff = max(elem - min_val, max_diff)\n min_val = min(elem, min_val)\n\n max_val = max_diff - min_val\n return min_val, max_val \n</code></pre>\n" }, { "answer_id": 2083752, "score": 0, "text": "<p>Alright, I took a crack at it myself and came up with the following approach which seems to be $O\\big( (N/M)^2 + N \\log(N) \\big)$, where $M$ is the optimal value of (P1).</p>\n\n<p>Loop the following over $k$ from $1,2,\\dots$:</p>\n\n<ul>\n<li>Partition the sequence into $2^k$ contiguous sequences of size at most $\\lceil N 2^{-k} \\rceil $.</li>\n<li>Find each group's local minima and maxima, saving the indices as well.</li>\n<li>Perform this brute-force search among the local maxima / minima to find the best $n_1$ and $n_2$ belonging to different groups</li>\n<li>if $n_2-n_1 &gt; N 2^{-k}$, break out of this loop and return this pair, since no better pair may be found from within the same group from further subdivision</li>\n</ul>\n\n<p>Each $k$th iteration requires:\n$O\\big( N+ 4^k\\big)$ operations, so summing over all necessary $k$ gives the total complexity $O\\big((N/M)^2 + N \\log(N) \\big)$, where the former term dominates for small $M$ and the latter terms dominates for $M \\gtrsim \\sqrt{N}$.</p>\n" }, { "answer_id": 2084150, "score": 4, "text": "<p>This is $O(N)$.</p>\n\n<p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all elements after it, call it a <em>trough</em>.</p>\n\n<p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); and similarly, $a_j$ must be a trough.</p>\n\n<p>So here is the 
algorithm:</p>\n\n<ol>\n<li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li>\n<li>Scan backwards through the sequence, marking all troughs ($N$ steps).</li>\n<li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li>\n<li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li>\n<li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li>\n<li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li>\n</ol>\n\n<p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p>\n\n<p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p>\n" }, { "answer_id": 2083890, "score": 2, "text": "<p>I believe you can do this in $O(N \\log N)$, but there may be a faster way. Here is a high level description of the algorithm:</p>\n\n<ol>\n<li>Store the elements as tuples of $(value, index)$ where $value$ is the value of the element and index is its position in the original array. So for ex. if your input was <code>[3, 1, 4, 2]</code>, you would store it as <code>[(3,0), (1,1), (4,2), (2,3)]</code>. This will be $O(N)$</li>\n<li>Sort this new array by the first element (value field) in the tuple in decreasing order. Thus previous example would now look like: <code>[(4, 2), (3, 0), (2, 3), (1, 1)]</code>. This can be done in $O(N\\log N)$</li>\n<li>Find the maximum difference between second elements (index fields) of the tuples such that the larger one comes after the smaller one in the new array. This can be done in $O(N)$.</li>\n</ol>\n\n<p>Total complexity = $O(N \\log N)$. </p>\n\n<p>I think steps $1$ and $2$ are pretty self explanatory. 
If you are curious about how step $3$ can be done in $O(N)$, here is some code:</p>\n\n<pre><code>def max_diff_indexes(arr):\n \"\"\"\n @params:\n\n arr - This is just an array of the SECOND parts of the tuple in \n the algorithm described above. That is, arr contains only ints \n representing the second values of the tuple (index field) after \n the sorting in step 2. \n\n @return:\n\n max_pair - Returns a pair of indexes n_1 and n_2 that maximise \n n_2 - n_1 such that a_{n_1} &gt; a_{n_2}. The first property is \n ensured by this algorithm while the second property in ensured\n by the fact that the array was initally sorted in decresing order\n of value\n\n \"\"\"\n if not arr: return None\n min_val = arr[0]\n max_diff = 0\n for elem in arr:\n max_diff = max(elem - min_val, max_diff)\n min_val = min(elem, min_val)\n\n max_val = max_diff - min_val\n return min_val, max_val \n</code></pre>\n" }, { "answer_id": 2083752, "score": 0, "text": "<p>Alright, I took a crack at it myself and came up with the following approach which seems to be $O\\big( (N/M)^2 + N \\log(N) \\big)$, where $M$ is the optimal value of (P1).</p>\n\n<p>Loop the following over $k$ from $1,2,\\dots$:</p>\n\n<ul>\n<li>Partition the sequence into $2^k$ contiguous sequences of size at most $\\lceil N 2^{-k} \\rceil $.</li>\n<li>Find each group's local minima and maxima, saving the indices as well.</li>\n<li>Perform this brute-force search among the local maxima / minima to find the best $n_1$ and $n_2$ belonging to different groups</li>\n<li>if $n_2-n_1 &gt; N 2^{-k}$, break out of this loop and return this pair, since no better pair may be found from within the same group from further subdivision</li>\n</ul>\n\n<p>Each $k$th iteration requires:\n$O\\big( N+ 4^k\\big)$ operations, so summing over all necessary $k$ gives the total complexity $O\\big((N/M)^2 + N \\log(N) \\big)$, where the former term dominates for small $M$ and the latter terms dominates for $M \\gtrsim \\sqrt{N}$.</p>\n" }, { 
"answer_id": 2084150, "score": 4, "text": "<p>This is $O(N)$.</p>\n\n<p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all elements after it, call it a <em>trough</em>.</p>\n\n<p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); and similarly, $a_j$ must be a trough.</p>\n\n<p>So here is the algorithm:</p>\n\n<ol>\n<li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li>\n<li>Scan backwards through the sequence, marking all troughs ($N$ steps).</li>\n<li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li>\n<li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li>\n<li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li>\n<li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li>\n</ol>\n\n<p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p>\n\n<p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p>\n" }, { "answer_id": 2083890, "score": 2, "text": "<p>I believe you can do this in $O(N \\log N)$, but there may be a faster way. Here is a high level description of the algorithm:</p>\n\n<ol>\n<li>Store the elements as tuples of $(value, index)$ where $value$ is the value of the element and index is its position in the original array. So for ex. if your input was <code>[3, 1, 4, 2]</code>, you would store it as <code>[(3,0), (1,1), (4,2), (2,3)]</code>. 
This will be $O(N)$</li>\n<li>Sort this new array by the first element (value field) in the tuple in decreasing order. Thus previous example would now look like: <code>[(4, 2), (3, 0), (2, 3), (1, 1)]</code>. This can be done in $O(N\\log N)$</li>\n<li>Find the maximum difference between second elements (index fields) of the tuples such that the larger one comes after the smaller one in the new array. This can be done in $O(N)$.</li>\n</ol>\n\n<p>Total complexity = $O(N \\log N)$. </p>\n\n<p>I think steps $1$ and $2$ are pretty self explanatory. If you are curious about how step $3$ can be done in $O(N)$, here is some code:</p>\n\n<pre><code>def max_diff_indexes(arr):\n \"\"\"\n @params:\n\n arr - This is just an array of the SECOND parts of the tuple in \n the algorithm described above. That is, arr contains only ints \n representing the second values of the tuple (index field) after \n the sorting in step 2. \n\n @return:\n\n max_pair - Returns a pair of indexes n_1 and n_2 that maximise \n n_2 - n_1 such that a_{n_1} &gt; a_{n_2}. 
The first property is \n ensured by this algorithm while the second property in ensured\n by the fact that the array was initally sorted in decresing order\n of value\n\n \"\"\"\n if not arr: return None\n min_val = arr[0]\n max_diff = 0\n for elem in arr:\n max_diff = max(elem - min_val, max_diff)\n min_val = min(elem, min_val)\n\n max_val = max_diff - min_val\n return min_val, max_val \n</code></pre>\n" }, { "answer_id": 2083752, "score": 0, "text": "<p>Alright, I took a crack at it myself and came up with the following approach which seems to be $O\\big( (N/M)^2 + N \\log(N) \\big)$, where $M$ is the optimal value of (P1).</p>\n\n<p>Loop the following over $k$ from $1,2,\\dots$:</p>\n\n<ul>\n<li>Partition the sequence into $2^k$ contiguous sequences of size at most $\\lceil N 2^{-k} \\rceil $.</li>\n<li>Find each group's local minima and maxima, saving the indices as well.</li>\n<li>Perform this brute-force search among the local maxima / minima to find the best $n_1$ and $n_2$ belonging to different groups</li>\n<li>if $n_2-n_1 &gt; N 2^{-k}$, break out of this loop and return this pair, since no better pair may be found from within the same group from further subdivision</li>\n</ul>\n\n<p>Each $k$th iteration requires:\n$O\\big( N+ 4^k\\big)$ operations, so summing over all necessary $k$ gives the total complexity $O\\big((N/M)^2 + N \\log(N) \\big)$, where the former term dominates for small $M$ and the latter terms dominates for $M \\gtrsim \\sqrt{N}$.</p>\n" }, { "answer_id": 2084150, "score": 4, "text": "<p>This is $O(N)$.</p>\n\n<p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all elements after it, call it a <em>trough</em>.</p>\n\n<p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); and similarly, $a_j$ must be a trough.</p>\n\n<p>So here is the 
algorithm:</p>\n\n<ol>\n<li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li>\n<li>Scan backwards through the sequence, marking all troughs ($N$ steps).</li>\n<li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li>\n<li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li>\n<li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li>\n<li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li>\n</ol>\n\n<p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p>\n\n<p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p>\n" }, { "answer_id": 2083890, "score": 2, "text": "<p>I believe you can do this in $O(N \\log N)$, but there may be a faster way. Here is a high level description of the algorithm:</p>\n\n<ol>\n<li>Store the elements as tuples of $(value, index)$ where $value$ is the value of the element and index is its position in the original array. So for ex. if your input was <code>[3, 1, 4, 2]</code>, you would store it as <code>[(3,0), (1,1), (4,2), (2,3)]</code>. This will be $O(N)$</li>\n<li>Sort this new array by the first element (value field) in the tuple in decreasing order. Thus previous example would now look like: <code>[(4, 2), (3, 0), (2, 3), (1, 1)]</code>. This can be done in $O(N\\log N)$</li>\n<li>Find the maximum difference between second elements (index fields) of the tuples such that the larger one comes after the smaller one in the new array. This can be done in $O(N)$.</li>\n</ol>\n\n<p>Total complexity = $O(N \\log N)$. </p>\n\n<p>I think steps $1$ and $2$ are pretty self explanatory. 
If you are curious about how step $3$ can be done in $O(N)$, here is some code:</p>\n\n<pre><code>def max_diff_indexes(arr):\n \"\"\"\n @params:\n\n arr - This is just an array of the SECOND parts of the tuple in \n the algorithm described above. That is, arr contains only ints \n representing the second values of the tuple (index field) after \n the sorting in step 2. \n\n @return:\n\n max_pair - Returns a pair of indexes n_1 and n_2 that maximise \n n_2 - n_1 such that a_{n_1} &gt; a_{n_2}. The first property is \n ensured by this algorithm while the second property in ensured\n by the fact that the array was initally sorted in decresing order\n of value\n\n \"\"\"\n if not arr: return None\n min_val = arr[0]\n max_diff = 0\n for elem in arr:\n max_diff = max(elem - min_val, max_diff)\n min_val = min(elem, min_val)\n\n max_val = max_diff - min_val\n return min_val, max_val \n</code></pre>\n" }, { "answer_id": 2083752, "score": 0, "text": "<p>Alright, I took a crack at it myself and came up with the following approach which seems to be $O\\big( (N/M)^2 + N \\log(N) \\big)$, where $M$ is the optimal value of (P1).</p>\n\n<p>Loop the following over $k$ from $1,2,\\dots$:</p>\n\n<ul>\n<li>Partition the sequence into $2^k$ contiguous sequences of size at most $\\lceil N 2^{-k} \\rceil $.</li>\n<li>Find each group's local minima and maxima, saving the indices as well.</li>\n<li>Perform this brute-force search among the local maxima / minima to find the best $n_1$ and $n_2$ belonging to different groups</li>\n<li>if $n_2-n_1 &gt; N 2^{-k}$, break out of this loop and return this pair, since no better pair may be found from within the same group from further subdivision</li>\n</ul>\n\n<p>Each $k$th iteration requires:\n$O\\big( N+ 4^k\\big)$ operations, so summing over all necessary $k$ gives the total complexity $O\\big((N/M)^2 + N \\log(N) \\big)$, where the former term dominates for small $M$ and the latter terms dominates for $M \\gtrsim \\sqrt{N}$.</p>\n" }, { 
"answer_id": 2084150, "score": 4, "text": "<p>This is $O(N)$.</p>\n\n<p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all elements after it, call it a <em>trough</em>.</p>\n\n<p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); and similarly, $a_j$ must be a trough.</p>\n\n<p>So here is the algorithm:</p>\n\n<ol>\n<li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li>\n<li>Scan backwards through the sequence, marking all troughs ($N$ steps).</li>\n<li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li>\n<li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li>\n<li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li>\n<li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li>\n</ol>\n\n<p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p>\n\n<p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p>\n" }, { "answer_id": 2083890, "score": 2, "text": "<p>I believe you can do this in $O(N \\log N)$, but there may be a faster way. Here is a high level description of the algorithm:</p>\n\n<ol>\n<li>Store the elements as tuples of $(value, index)$ where $value$ is the value of the element and index is its position in the original array. So for ex. if your input was <code>[3, 1, 4, 2]</code>, you would store it as <code>[(3,0), (1,1), (4,2), (2,3)]</code>. 
This will be $O(N)$</li>\n<li>Sort this new array by the first element (value field) in the tuple in decreasing order. Thus previous example would now look like: <code>[(4, 2), (3, 0), (2, 3), (1, 1)]</code>. This can be done in $O(N\\log N)$</li>\n<li>Find the maximum difference between second elements (index fields) of the tuples such that the larger one comes after the smaller one in the new array. This can be done in $O(N)$.</li>\n</ol>\n\n<p>Total complexity = $O(N \\log N)$. </p>\n\n<p>I think steps $1$ and $2$ are pretty self explanatory. If you are curious about how step $3$ can be done in $O(N)$, here is some code:</p>\n\n<pre><code>def max_diff_indexes(arr):\n \"\"\"\n @params:\n\n arr - This is just an array of the SECOND parts of the tuple in \n the algorithm described above. That is, arr contains only ints \n representing the second values of the tuple (index field) after \n the sorting in step 2. \n\n @return:\n\n max_pair - Returns a pair of indexes n_1 and n_2 that maximise \n n_2 - n_1 such that a_{n_1} &gt; a_{n_2}. 
The first property is \n ensured by this algorithm while the second property in ensured\n by the fact that the array was initally sorted in decresing order\n of value\n\n \"\"\"\n if not arr: return None\n min_val = arr[0]\n max_diff = 0\n for elem in arr:\n max_diff = max(elem - min_val, max_diff)\n min_val = min(elem, min_val)\n\n max_val = max_diff - min_val\n return min_val, max_val \n</code></pre>\n" }, { "answer_id": 2083752, "score": 0, "text": "<p>Alright, I took a crack at it myself and came up with the following approach which seems to be $O\\big( (N/M)^2 + N \\log(N) \\big)$, where $M$ is the optimal value of (P1).</p>\n\n<p>Loop the following over $k$ from $1,2,\\dots$:</p>\n\n<ul>\n<li>Partition the sequence into $2^k$ contiguous sequences of size at most $\\lceil N 2^{-k} \\rceil $.</li>\n<li>Find each group's local minima and maxima, saving the indices as well.</li>\n<li>Perform this brute-force search among the local maxima / minima to find the best $n_1$ and $n_2$ belonging to different groups</li>\n<li>if $n_2-n_1 &gt; N 2^{-k}$, break out of this loop and return this pair, since no better pair may be found from within the same group from further subdivision</li>\n</ul>\n\n<p>Each $k$th iteration requires:\n$O\\big( N+ 4^k\\big)$ operations, so summing over all necessary $k$ gives the total complexity $O\\big((N/M)^2 + N \\log(N) \\big)$, where the former term dominates for small $M$ and the latter terms dominates for $M \\gtrsim \\sqrt{N}$.</p>\n" }, { "answer_id": 2084150, "score": 4, "text": "<p>This is $O(N)$.</p>\n\n<p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all elements after it, call it a <em>trough</em>.</p>\n\n<p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); and similarly, $a_j$ must be a trough.</p>\n\n<p>So here is the 
algorithm:</p>\n\n<ol>\n<li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li>\n<li>Scan backwards through the sequence, marking all troughs ($N$ steps).</li>\n<li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li>\n<li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li>\n<li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li>\n<li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li>\n</ol>\n\n<p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p>\n\n<p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p>\n" } ]
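The $O(N)$ peak/trough algorithm in this thread (answer 2084150) is given only in prose. Below is a minimal Python sketch of those steps; the function name `max_separation` and the two-pointer merge details are my own reading of the answer, not code from it, and indices are 0-based rather than the answer's 1-based numbering:

```python
def max_separation(a):
    """Return (n1, n2) maximising n2 - n1 subject to n1 < n2 and
    a[n1] > a[n2], or None if no such pair exists.  Runs in O(N)."""
    n = len(a)
    if n < 2:
        return None

    # Peaks: indices whose value is greater than every element before them.
    peaks, running_max = [], None
    for i, x in enumerate(a):
        if running_max is None or x > running_max:
            peaks.append(i)
            running_max = x

    # Troughs: indices whose value is less than every element after them
    # (this scan has to run backwards).
    troughs, running_min = [], None
    for i in range(n - 1, -1, -1):
        if running_min is None or a[i] < running_min:
            troughs.append(i)
            running_min = a[i]
    troughs.reverse()  # restore increasing index order

    # Both lists are increasing in value, so for each peak the furthest
    # usable trough is the last one with a smaller value, and the trough
    # pointer only ever moves forward: the merge is linear overall.
    best, t = None, -1
    for p in peaks:
        while t + 1 < len(troughs) and a[troughs[t + 1]] < a[p]:
            t += 1
        if t >= 0 and troughs[t] > p and \
                (best is None or troughs[t] - p > best[1] - best[0]):
            best = (p, troughs[t])
    return best
```

For example, on `[3, 1, 4, 2]` it reports the position pair `(0, 3)`, i.e. $a_0 = 3 > a_3 = 2$ at separation $3$.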
Question: What is the most efficient way to find the most separated pair in a finite sequence such that the first is greater than the second? Answer 1: <p>This is $O(N)$.</p> <p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all elements after it, call it a <em>trough</em>.</p> <p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); and similarly, $a_j$ must be a trough.</p> <p>So here is the algorithm:</p> <ol> <li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li> <li>Scan backwards through the sequence, marking all troughs ($N$ steps).</li> <li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li> <li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li> <li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li> <li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li> </ol> <p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p> <p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p> Answer 2: <p>I believe you can do this in $O(N \log N)$, but there may be a faster way. Here is a high level description of the algorithm:</p> <ol> <li>Store the elements as tuples of $(value, index)$ where $value$ is the value of the element and index is its position in the original array. So for ex. if your input was <code>[3, 1, 4, 2]</code>, you would store it as <code>[(3,0), (1,1), (4,2), (2,3)]</code>. 
This will be $O(N)$</li> <li>Sort this new array by the first element (value field) in the tuple in decreasing order. Thus previous example would now look like: <code>[(4, 2), (3, 0), (2, 3), (1, 1)]</code>. This can be done in $O(N\log N)$</li> <li>Find the maximum difference between second elements (index fields) of the tuples such that the larger one comes after the smaller one in the new array. This can be done in $O(N)$.</li> </ol> <p>Total complexity = $O(N \log N)$. </p> <p>I think steps $1$ and $2$ are pretty self explanatory. If you are curious about how step $3$ can be done in $O(N)$, here is some code:</p> <pre><code>def max_diff_indexes(arr): """ @params: arr - This is just an array of the SECOND parts of the tuple in the algorithm described above. That is, arr contains only ints representing the second values of the tuple (index field) after the sorting in step 2. @return: max_pair - Returns a pair of indexes n_1 and n_2 that maximise n_2 - n_1 such that a_{n_1} &gt; a_{n_2}. 
The first property is ensured by this algorithm while the second property in ensured by the fact that the array was initally sorted in decresing order of value """ if not arr: return None min_val = arr[0] max_diff = 0 for elem in arr: max_diff = max(elem - min_val, max_diff) min_val = min(elem, min_val) max_val = max_diff - min_val return min_val, max_val </code></pre> Answer 3: <p>Alright, I took a crack at it myself and came up with the following approach which seems to be $O\big( (N/M)^2 + N \log(N) \big)$, where $M$ is the optimal value of (P1).</p> <p>Loop the following over $k$ from $1,2,\dots$:</p> <ul> <li>Partition the sequence into $2^k$ contiguous sequences of size at most $\lceil N 2^{-k} \rceil $.</li> <li>Find each group's local minima and maxima, saving the indices as well.</li> <li>Perform this brute-force search among the local maxima / minima to find the best $n_1$ and $n_2$ belonging to different groups</li> <li>if $n_2-n_1 &gt; N 2^{-k}$, break out of this loop and return this pair, since no better pair may be found from within the same group from further subdivision</li> </ul> <p>Each $k$th iteration requires: $O\big( N+ 4^k\big)$ operations, so summing over all necessary $k$ gives the total complexity $O\big((N/M)^2 + N \log(N) \big)$, where the former term dominates for small $M$ and the latter terms dominates for $M \gtrsim \sqrt{N}$.</p> Answer 4: <p>This is $O(N)$.</p> <p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all elements after it, call it a <em>trough</em>.</p> <p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); and similarly, $a_j$ must be a trough.</p> <p>So here is the algorithm:</p> <ol> <li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li> <li>Scan backwards through the sequence, marking all troughs ($N$ 
steps).</li> <li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li> <li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li> <li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li> <li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li> </ol> <p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p> <p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p> Answer 5: <p>I believe you can do this in $O(N \log N)$, but there may be a faster way. Here is a high level description of the algorithm:</p> <ol> <li>Store the elements as tuples of $(value, index)$ where $value$ is the value of the element and index is its position in the original array. So for ex. if your input was <code>[3, 1, 4, 2]</code>, you would store it as <code>[(3,0), (1,1), (4,2), (2,3)]</code>. This will be $O(N)$</li> <li>Sort this new array by the first element (value field) in the tuple in decreasing order. Thus previous example would now look like: <code>[(4, 2), (3, 0), (2, 3), (1, 1)]</code>. This can be done in $O(N\log N)$</li> <li>Find the maximum difference between second elements (index fields) of the tuples such that the larger one comes after the smaller one in the new array. This can be done in $O(N)$.</li> </ol> <p>Total complexity = $O(N \log N)$. </p> <p>I think steps $1$ and $2$ are pretty self explanatory. If you are curious about how step $3$ can be done in $O(N)$, here is some code:</p> <pre><code>def max_diff_indexes(arr): """ @params: arr - This is just an array of the SECOND parts of the tuple in the algorithm described above. 
That is, arr contains only ints representing the second values of the tuple (index field) after the sorting in step 2. @return: max_pair - Returns a pair of indexes n_1 and n_2 that maximise n_2 - n_1 such that a_{n_1} &gt; a_{n_2}. The first property is ensured by this algorithm while the second property in ensured by the fact that the array was initally sorted in decresing order of value """ if not arr: return None min_val = arr[0] max_diff = 0 for elem in arr: max_diff = max(elem - min_val, max_diff) min_val = min(elem, min_val) max_val = max_diff - min_val return min_val, max_val </code></pre> Answer 6: <p>Alright, I took a crack at it myself and came up with the following approach which seems to be $O\big( (N/M)^2 + N \log(N) \big)$, where $M$ is the optimal value of (P1).</p> <p>Loop the following over $k$ from $1,2,\dots$:</p> <ul> <li>Partition the sequence into $2^k$ contiguous sequences of size at most $\lceil N 2^{-k} \rceil $.</li> <li>Find each group's local minima and maxima, saving the indices as well.</li> <li>Perform this brute-force search among the local maxima / minima to find the best $n_1$ and $n_2$ belonging to different groups</li> <li>if $n_2-n_1 &gt; N 2^{-k}$, break out of this loop and return this pair, since no better pair may be found from within the same group from further subdivision</li> </ul> <p>Each $k$th iteration requires: $O\big( N+ 4^k\big)$ operations, so summing over all necessary $k$ gives the total complexity $O\big((N/M)^2 + N \log(N) \big)$, where the former term dominates for small $M$ and the latter terms dominates for $M \gtrsim \sqrt{N}$.</p> Answer 7: <p>This is $O(N)$.</p> <p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all elements after it, call it a <em>trough</em>.</p> <p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); 
and similarly, $a_j$ must be a trough.</p> <p>So here is the algorithm:</p> <ol> <li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li> <li>Scan backwards through the sequence, marking all troughs ($N$ steps).</li> <li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li> <li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li> <li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li> <li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li> </ol> <p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p> <p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p> Answer 8: <p>I believe you can do this in $O(N \log N)$, but there may be a faster way. Here is a high level description of the algorithm:</p> <ol> <li>Store the elements as tuples of $(value, index)$ where $value$ is the value of the element and index is its position in the original array. So for ex. if your input was <code>[3, 1, 4, 2]</code>, you would store it as <code>[(3,0), (1,1), (4,2), (2,3)]</code>. This will be $O(N)$</li> <li>Sort this new array by the first element (value field) in the tuple in decreasing order. Thus previous example would now look like: <code>[(4, 2), (3, 0), (2, 3), (1, 1)]</code>. This can be done in $O(N\log N)$</li> <li>Find the maximum difference between second elements (index fields) of the tuples such that the larger one comes after the smaller one in the new array. This can be done in $O(N)$.</li> </ol> <p>Total complexity = $O(N \log N)$. </p> <p>I think steps $1$ and $2$ are pretty self explanatory. 
If you are curious about how step $3$ can be done in $O(N)$, here is some code:</p> <pre><code>def max_diff_indexes(arr): """ @params: arr - This is just an array of the SECOND parts of the tuple in the algorithm described above. That is, arr contains only ints representing the second values of the tuple (index field) after the sorting in step 2. @return: max_pair - Returns a pair of indexes n_1 and n_2 that maximise n_2 - n_1 such that a_{n_1} &gt; a_{n_2}. The first property is ensured by this algorithm while the second property in ensured by the fact that the array was initally sorted in decresing order of value """ if not arr: return None min_val = arr[0] max_diff = 0 for elem in arr: max_diff = max(elem - min_val, max_diff) min_val = min(elem, min_val) max_val = max_diff - min_val return min_val, max_val </code></pre> Answer 9: <p>Alright, I took a crack at it myself and came up with the following approach which seems to be $O\big( (N/M)^2 + N \log(N) \big)$, where $M$ is the optimal value of (P1).</p> <p>Loop the following over $k$ from $1,2,\dots$:</p> <ul> <li>Partition the sequence into $2^k$ contiguous sequences of size at most $\lceil N 2^{-k} \rceil $.</li> <li>Find each group's local minima and maxima, saving the indices as well.</li> <li>Perform this brute-force search among the local maxima / minima to find the best $n_1$ and $n_2$ belonging to different groups</li> <li>if $n_2-n_1 &gt; N 2^{-k}$, break out of this loop and return this pair, since no better pair may be found from within the same group from further subdivision</li> </ul> <p>Each $k$th iteration requires: $O\big( N+ 4^k\big)$ operations, so summing over all necessary $k$ gives the total complexity $O\big((N/M)^2 + N \log(N) \big)$, where the former term dominates for small $M$ and the latter terms dominates for $M \gtrsim \sqrt{N}$.</p> Answer 10: <p>This is $O(N)$.</p> <p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all 
elements after it, call it a <em>trough</em>.</p> <p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); and similarly, $a_j$ must be a trough.</p> <p>So here is the algorithm:</p> <ol> <li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li> <li>Scan backwards through the sequence, marking all troughs ($N$ steps).</li> <li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li> <li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li> <li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li> <li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li> </ol> <p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p> <p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p> Answer 11: <p>I believe you can do this in $O(N \log N)$, but there may be a faster way. Here is a high level description of the algorithm:</p> <ol> <li>Store the elements as tuples of $(value, index)$ where $value$ is the value of the element and index is its position in the original array. So for ex. if your input was <code>[3, 1, 4, 2]</code>, you would store it as <code>[(3,0), (1,1), (4,2), (2,3)]</code>. This will be $O(N)$</li> <li>Sort this new array by the first element (value field) in the tuple in decreasing order. Thus previous example would now look like: <code>[(4, 2), (3, 0), (2, 3), (1, 1)]</code>. 
This can be done in $O(N\log N)$</li> <li>Find the maximum difference between second elements (index fields) of the tuples such that the larger one comes after the smaller one in the new array. This can be done in $O(N)$.</li> </ol> <p>Total complexity = $O(N \log N)$. </p> <p>I think steps $1$ and $2$ are pretty self explanatory. If you are curious about how step $3$ can be done in $O(N)$, here is some code:</p> <pre><code>def max_diff_indexes(arr): """ @params: arr - This is just an array of the SECOND parts of the tuple in the algorithm described above. That is, arr contains only ints representing the second values of the tuple (index field) after the sorting in step 2. @return: max_pair - Returns a pair of indexes n_1 and n_2 that maximise n_2 - n_1 such that a_{n_1} &gt; a_{n_2}. The first property is ensured by this algorithm while the second property in ensured by the fact that the array was initally sorted in decresing order of value """ if not arr: return None min_val = arr[0] max_diff = 0 for elem in arr: max_diff = max(elem - min_val, max_diff) min_val = min(elem, min_val) max_val = max_diff - min_val return min_val, max_val </code></pre> Answer 12: <p>Alright, I took a crack at it myself and came up with the following approach which seems to be $O\big( (N/M)^2 + N \log(N) \big)$, where $M$ is the optimal value of (P1).</p> <p>Loop the following over $k$ from $1,2,\dots$:</p> <ul> <li>Partition the sequence into $2^k$ contiguous sequences of size at most $\lceil N 2^{-k} \rceil $.</li> <li>Find each group's local minima and maxima, saving the indices as well.</li> <li>Perform this brute-force search among the local maxima / minima to find the best $n_1$ and $n_2$ belonging to different groups</li> <li>if $n_2-n_1 &gt; N 2^{-k}$, break out of this loop and return this pair, since no better pair may be found from within the same group from further subdivision</li> </ul> <p>Each $k$th iteration requires: $O\big( N+ 4^k\big)$ operations, so summing over all 
necessary $k$ gives the total complexity $O\big((N/M)^2 + N \log(N) \big)$, where the former term dominates for small $M$ and the latter terms dominates for $M \gtrsim \sqrt{N}$.</p> Answer 13: <p>This is $O(N)$.</p> <p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all elements after it, call it a <em>trough</em>.</p> <p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); and similarly, $a_j$ must be a trough.</p> <p>So here is the algorithm:</p> <ol> <li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li> <li>Scan backwards through the sequence, marking all troughs ($N$ steps).</li> <li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li> <li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li> <li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li> <li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li> </ol> <p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p> <p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p> Answer 14: <p>I believe you can do this in $O(N \log N)$, but there may be a faster way. Here is a high level description of the algorithm:</p> <ol> <li>Store the elements as tuples of $(value, index)$ where $value$ is the value of the element and index is its position in the original array. So for ex. if your input was <code>[3, 1, 4, 2]</code>, you would store it as <code>[(3,0), (1,1), (4,2), (2,3)]</code>. 
This will be $O(N)$</li> <li>Sort this new array by the first element (value field) in the tuple in decreasing order. Thus previous example would now look like: <code>[(4, 2), (3, 0), (2, 3), (1, 1)]</code>. This can be done in $O(N\log N)$</li> <li>Find the maximum difference between second elements (index fields) of the tuples such that the larger one comes after the smaller one in the new array. This can be done in $O(N)$.</li> </ol> <p>Total complexity = $O(N \log N)$. </p> <p>I think steps $1$ and $2$ are pretty self explanatory. If you are curious about how step $3$ can be done in $O(N)$, here is some code:</p> <pre><code>def max_diff_indexes(arr): """ @params: arr - This is just an array of the SECOND parts of the tuple in the algorithm described above. That is, arr contains only ints representing the second values of the tuple (index field) after the sorting in step 2. @return: max_pair - Returns a pair of indexes n_1 and n_2 that maximise n_2 - n_1 such that a_{n_1} &gt; a_{n_2}. 
The first property is ensured by this algorithm while the second property in ensured by the fact that the array was initally sorted in decresing order of value """ if not arr: return None min_val = arr[0] max_diff = 0 for elem in arr: max_diff = max(elem - min_val, max_diff) min_val = min(elem, min_val) max_val = max_diff - min_val return min_val, max_val </code></pre> Answer 15: <p>Alright, I took a crack at it myself and came up with the following approach which seems to be $O\big( (N/M)^2 + N \log(N) \big)$, where $M$ is the optimal value of (P1).</p> <p>Loop the following over $k$ from $1,2,\dots$:</p> <ul> <li>Partition the sequence into $2^k$ contiguous sequences of size at most $\lceil N 2^{-k} \rceil $.</li> <li>Find each group's local minima and maxima, saving the indices as well.</li> <li>Perform this brute-force search among the local maxima / minima to find the best $n_1$ and $n_2$ belonging to different groups</li> <li>if $n_2-n_1 &gt; N 2^{-k}$, break out of this loop and return this pair, since no better pair may be found from within the same group from further subdivision</li> </ul> <p>Each $k$th iteration requires: $O\big( N+ 4^k\big)$ operations, so summing over all necessary $k$ gives the total complexity $O\big((N/M)^2 + N \log(N) \big)$, where the former term dominates for small $M$ and the latter terms dominates for $M \gtrsim \sqrt{N}$.</p> Answer 16: <p>This is $O(N)$.</p> <p>If $a_n$ is greater than all the elements before it, call it a <em>peak</em>; and if $a_n$ is less than all elements after it, call it a <em>trough</em>.</p> <p>Then if $(a_i,a_j)$ is a maximally separated pair satisfying $a_i&gt;a_j$, we know that $a_i$ must be a peak (otherwise there would be an earlier element $a_k$ such that $a_k&gt;a_j$); and similarly, $a_j$ must be a trough.</p> <p>So here is the algorithm:</p> <ol> <li>Scan forwards through the sequence, marking all peaks ($N$ steps).</li> <li>Scan backwards through the sequence, marking all troughs ($N$ 
steps).</li> <li>Set $p = 1$ (this is the peak index). Note that $a_1$ is a peak by definition.</li> <li>Set $t$ (the trough index) to be the index of the last trough such that $a_t &lt; a_p$.</li> <li>Now $t-p$ is a candidate for the maximal separation; if it is the best yet, remember it.</li> <li>If we have scanned all peaks, we are done. Otherwise, set $p$ to be the index of the next peak and go to Step 4.</li> </ol> <p>Note that we can start the scan in Step 4 from the value of $t$ in the previous iteration, so the total number of operations used by that step in the whole algorithm is $O(N)$.</p> <p><strong>Edited to add:</strong> In fact we can omit Step 1, and just scan for the next peak in Step 6. (We can't omit Step 2, because scanning for troughs has to be done backwards.)</p>
Find $x^2+y^2+z^2$ if $x+y+z=0$ and $\sin{x}+\sin{y}+\sin{z}=0$ and $\cos{x}+\cos{y}+\cos{z}=0$ for any $x,y,z \in [-\pi,\pi]$
2,083,271
5
https://math.stackexchange.com/questions/2083271/find-x2y2z2-if-xyz-0-and-sinx-siny-sinz-0-and-cosx-co
<p>Find $x^2+y^2+z^2$ if $x+y+z=0$ and $\sin{x}+\sin{y}+\sin{z}=0$ and $\cos{x}+\cos{y}+\cos{z}=0$ for any $x,y,z \in [-\pi,\pi]$.</p> <p><strong>My attempt</strong>: I found one answer $x=0,y=\frac{2\pi}{3},z=-\frac{2\pi}{3}$, which gives $x^2+y^2+z^2=\frac{8{\pi}^2}{9}$. But I want a way to find the answer using equations. Any hints?</p>
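Before the answers, a quick numerical sanity check (illustrative only, not part of the original post) confirms that the attempt's candidate $x=0$, $y=\frac{2\pi}{3}$, $z=-\frac{2\pi}{3}$ satisfies all three constraints and yields $\frac{8\pi^2}{9}$:

```python
import math

x, y, z = 0.0, 2 * math.pi / 3, -2 * math.pi / 3

# All three constraints hold (up to floating-point rounding) ...
assert abs(x + y + z) < 1e-12
assert abs(math.sin(x) + math.sin(y) + math.sin(z)) < 1e-12
assert abs(math.cos(x) + math.cos(y) + math.cos(z)) < 1e-12

# ... and the requested quantity equals 8*pi^2/9.
assert abs(x**2 + y**2 + z**2 - 8 * math.pi**2 / 9) < 1e-12
```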
[ { "answer_id": 2083351, "score": 1, "text": "<p>You can try this method:\n$$x+y+z=0 \\implies x+y=-z\\tag1$$\nAnd $$\\sin{x}+\\sin{y}+\\sin{z}=0$$\n$$\\implies \\sin{x}+\\sin{y}=-\\sin{z}\\tag2$$\nAnd $$\\cos{x}+\\cos{y}+\\cos{z}=0$$\n$$\\cos{x}+\\cos{y}=-\\cos{z}\\tag3$$</p>\n\n<p>So $$(2)^2+(3)^2 \\implies 2+2\\cos(x-y)=1$$\n$$\\implies \\cos(x-y)=-\\frac12\\tag4$$</p>\n\n<p>Similarly we get $$\\cos(y-z)=-\\frac12\\tag5$$\n$$\\cos(z-x)=-\\frac12\\tag6$$</p>\n\n<p>And from $(1)$, we get $\\cos(x+y)=\\cos z$</p>\n\n<p>Can you proceed now?</p>\n" }, { "answer_id": 2083373, "score": 1, "text": "<p>\\begin{eqnarray}\n\\sin z &amp;=&amp; -\\sin x-\\sin y\\\\\n\\cos z &amp;=&amp; -\\cos x-\\cos y\n\\end{eqnarray}\nso\n\\begin{eqnarray}\n\\sin^2z &amp;=&amp; \\sin^2x+\\sin^2y+2\\sin x\\sin y\\\\\n\\cos^2z &amp;=&amp; \\cos^2x+\\cos^2y+2\\cos x\\cos y\n\\end{eqnarray}\nadding these equations gives\n\\begin{eqnarray}\n1 &amp;=&amp; 1+1+2(\\sin x\\sin y+\\cos x\\cos y)\\\\\n0 &amp;=&amp; 1+2\\cos(x-y)\\\\\n-\\frac{1}{2} &amp;=&amp; \\cos(x-y)\n\\end{eqnarray}\nand also\n$$\\cos^2\\frac{x-y}{2}=\\frac{1+\\cos(x-y)}{2}=\\frac{1}{4}$$\nor\n$$\\cos\\frac{x-y}{2}=\\pm\\frac{1}{2}$$\nOn the other hand\n\\begin{eqnarray}\n(\\sin x+\\sin y+\\sin z)^2 + (\\cos x+\\cos y+\\cos z)^2 &amp;=&amp; 0\\\\\n\\sin^2x+\\sin^2y+\\sin^2z+2\\sin x\\sin y+2\\sin y\\sin z+2\\sin z\\sin x &amp;+&amp;\\\\\n\\cos^2x+\\cos^2y+\\cos^2z+2\\cos x\\cos y+2\\cos y\\cos z+2\\cos z\\cos x &amp;=&amp;0\\\\\n3+2\\big(\\cos x\\cos y+\\sin x\\sin y\\big)+2\\big(\\cos y\\cos z+\\sin y\\sin z\\big)+2\\big(\\cos z\\cos x+\\sin z\\sin x\\big) &amp;=&amp;0\\\\\n3+2\\cos(x-y)+2\\cos(y-z)+2\\cos(z-x) &amp;=&amp;0\\\\\n3+2(-\\frac{1}{2})+4\\cos\\frac{x-y}{2}\\cos3\\frac{x+y}{2} &amp;=&amp;0\\\\\n\\cos\\frac{x-y}{2}\\cos3\\frac{x+y}{2} &amp;=&amp;-\\frac{1}{2}\\\\\n\\cos3\\frac{x+y}{2} 
&amp;=&amp;\\mp1\\\\\n\\end{eqnarray}\nso\n\\begin{eqnarray}\n\\cos(x-y) &amp;=&amp; -\\frac{1}{2}\\\\\n\\cos3\\frac{x+y}{2} &amp;=&amp;\\mp1\n\\end{eqnarray}\nWe have two answers for each of them:\n\\begin{eqnarray}\nx-y &amp;=&amp; \\frac{2\\pi}{3},~\\frac{-\\pi}{3}\\\\\nx+y &amp;=&amp; 0,~\\frac{2\\pi}{3}\n\\end{eqnarray}\nWe obtain 4 answers from these equations, and permuting $(x,y,z)$ (the equations are symmetric) gives us 12 answers. Finally, substituting the answers into the equations, $x^2+y^2+z^2$ takes only one value, attained at $(\\displaystyle\\frac{2\\pi}{3},\\frac{-2\\pi}{3},0)$; it is\n$$\\frac{8\\pi^2}{9}$$</p>\n" }, { "answer_id": 2083378, "score": 0, "text": "<p>Although this is similar to the answer of @schrodingersCat, let $x\\geq y\\geq0\\geq z$ \n$$\\cos(x+y)^2=1-\\sin(x+y)^2$$ \n$$⇔1-(\\sin{x}+\\sin{y})^2=(\\cos{x}+\\cos{y})^2$$\n$$⇔1+2(\\cos{x}\\cos{y}+\\sin{x}\\sin{y})=0$$\n$$⇔\\cos(x-y)=-1/2 $$\n$$x-y=2\\pi/3$$\nand\n$$\\sin{x}+\\sin{y}=-\\sin{z}$$\n$$⇔\\sin(y+2\\pi/3)+\\sin{y}=\\sqrt3/2$$ \n$$⇔(1/2)\\sin{y}+\\sqrt3/2\\cos{y}=\\sqrt3/2$$\n$$⇔\\sin(y+\\pi/3)=\\sqrt3/2$$ </p>\n\n<p>then we get $$x=2\\pi/3 ,y=0,z=-2\\pi/3$$; if the angles $(x,y,z)$ are viewed as points of the unit circle, they form an equilateral triangle. The value of $x^2+y^2+z^2$ is unique:\n$$\\displaystyle x^2+y^2+z^2=\\frac{8\\pi^2}9 $$</p>\n" }, { "answer_id": 2083550, "score": 0, "text": "<p>HINT.- $$\\begin{cases}\\sin{x}+\\sin{y}+\\sin{z}=0\\\\\\cos{x}+\\cos{y}+\\cos{z}=0\\end{cases}\\iff\\begin{cases}2\\sin\\dfrac{x+y}{2}\\cos\\dfrac{x-y}{2}=-\\sin z\\\\2\\cos\\dfrac{x+y}{2}\\cos\\dfrac{x-y}{2}=-\\cos z\\end{cases}\\Rightarrow \\tan\\frac{x+y}{2}=\\tan z$$\nHence, because of $x,y,z \\in [-\\pi,\\pi]$, $$x+y=2z\\iff x+y+z=3z\\Rightarrow \\color{red}{z=0 \\text{ and }x+y=0}$$\nNOTE.-Obviously instead of $z$ we can choose either $x$ or $y$ to be zero. 
Thus $$\\color{red}{x^2+y^2+z^2=2t^2}$$ where $(x,y,z)=(t,-t,0)$ and with $t\\in [-\\pi,\\pi]$.</p>\n" }, { "answer_id": 2083351, "score": 1, "text": "<p>You can try this method:\n$$x+y+z=0 \\implies x+y=-z\\tag1$$\nAnd $$\\sin{x}+\\sin{y}+\\sin{z}=0$$\n$$\\implies \\sin{x}+\\sin{y}=-\\sin{z}\\tag2$$\nAnd $$\\cos{x}+\\cos{y}+\\cos{z}=0$$\n$$\\cos{x}+\\cos{y}=-\\cos{z}\\tag3$$</p>\n\n<p>So $$(2)^2+(3)^2 \\implies 2+2\\cos(x-y)=1$$\n$$\\implies \\cos(x-y)=-\\frac12\\tag4$$</p>\n\n<p>Similarly we get $$\\cos(y-z)=-\\frac12\\tag5$$\n$$\\cos(z-x)=-\\frac12\\tag6$$</p>\n\n<p>And from $(1)$, we get $\\cos(x+y)=\\cos z$</p>\n\n<p>Can you proceed now?</p>\n" }, { "answer_id": 2083373, "score": 1, "text": "<p>\\begin{eqnarray}\n\\sin z &amp;=&amp; -\\sin x-\\sin y\\\\\n\\cos z &amp;=&amp; -\\cos x-\\cos y\n\\end{eqnarray}\nso\n\\begin{eqnarray}\n\\sin^2z &amp;=&amp; \\sin^2x+\\sin^2y+2\\sin x\\sin y\\\\\n\\cos^2z &amp;=&amp; \\cos^2x+\\cos^2y+2\\cos x\\cos y\n\\end{eqnarray}\nadding to these equations concludes\n\\begin{eqnarray}\n1 &amp;=&amp; 1+1+2(\\sin x\\sin y+\\cos x\\cos y)\\\\\n0 &amp;=&amp; 1+2\\cos(x-y)\\\\\n-\\frac{1}{2} &amp;=&amp; \\cos(x-y)\n\\end{eqnarray}\nand also\n$$\\cos^2\\frac{x-y}{2}=\\frac{1+\\cos(x-y)}{2}=\\frac{1}{4}$$\nor\n$$\\cos\\frac{x-y}{2}=\\pm\\frac{1}{2}$$\nIn the other hand\n\\begin{eqnarray}\n(\\sin x+\\sin y+\\sin z)^2 + (\\cos x+\\cos y+\\cos z)^2 &amp;=&amp; 0\\\\\n\\sin^2x+\\sin^2y+\\sin^2z+2\\sin x\\sin y+2\\sin y\\sin z+2\\sin z\\sin x &amp;+&amp;\\\\\n\\cos^2x+\\cos^2y+\\cos^2z+2\\cos x\\cos y+2\\cos y\\cos z+2\\cos z\\cos x &amp;=&amp;0\\\\\n3+2\\big(\\cos x\\cos y+\\sin x\\sin y\\big)+2\\big(\\cos y\\cos z+\\sin y\\sin z\\big)+2\\big(\\cos z\\cos x+\\sin z\\sin x\\big) &amp;=&amp;0\\\\\n3+2\\cos(x-y)+2\\cos(y-z)+2\\cos(z-x) &amp;=&amp;0\\\\\n3+2(-\\frac{1}{2})+4\\cos\\frac{x-y}{2}\\cos3\\frac{x+y}{2} &amp;=&amp;0\\\\\n\\cos\\frac{x-y}{2}\\cos3\\frac{x+y}{2} &amp;=&amp;-\\frac{1}{2}\\\\\n\\cos3\\frac{x+y}{2} 
&amp;=&amp;\\mp1\\\\\n\\end{eqnarray}\nso\n\\begin{eqnarray}\n\\cos(x-y) &amp;=&amp; -\\frac{1}{2}\\\\\n\\cos3\\frac{x+y}{2} &amp;=&amp;\\mp1\n\\end{eqnarray}\nWe have two answer for each of them\n\\begin{eqnarray}\nx-y &amp;=&amp; \\frac{2\\pi}{3},~\\frac{-\\pi}{3}\\\\\nx+y &amp;=&amp; 0,~\\frac{2\\pi}{3}\n\\end{eqnarray}\nWe obtain 4 answer from these equations and then permutation over $(x,y,z)$ from equations symmetrically, gives us 12 answer. Finally by substitution answers in equations, $x^2+y^2+z^2$ has one value ($\\displaystyle\\frac{2\\pi}{3},\\frac{-2\\pi}{3},0$), it is\n$$\\frac{8\\pi^2}{9}$$</p>\n" }, { "answer_id": 2083378, "score": 0, "text": "<p>Although this is same to answer of @schrodingersCat Let $x\\geq y\\geq0\\geq z$ \n$$\\cos(x+y)^2=1-\\sin(x+y)^2$$ \n$$⇔1-(\\sin{x}+\\sin{y})^2=(\\cos{x}+\\cos{y})^2$$\n$$⇔1+2(\\cos{x}\\cos{y}+\\sin{x}\\sin{y})=0$$\n$$⇔\\cos(x-y)=-1/2 $$\n$$x-y=2\\pi/3$$\nand\n$$\\sin{x}+\\sin{y}=-\\sin{z}$$\n$$⇔\\sin(y+2\\pi/3)+\\sin{y}=\\sqrt3/2$$ \n$$⇔(1/2)\\sin{y}+\\sqrt3/2\\cos{y}=\\sqrt3/2$$\n$$⇔\\sin(y+\\pi/3)=\\sqrt3/2$$ </p>\n\n<p>then we get $$x=2\\pi/3 ,y=0,z=-2\\pi/3$$, if angle(x,y,z) are points of unit circle, they consist of the regular triangle. The answer of $x^2+y^2+z^2$ is unique and,\n$$\\displaystyle x^2+y^2+z^2=\\frac{8\\pi^2}9 $$</p>\n" }, { "answer_id": 2083550, "score": 0, "text": "<p>HINT.- $$\\begin{cases}\\sin{x}+\\sin{y}+\\sin{z}=0\\\\\\cos{x}+\\cos{y}+\\cos{z}=0\\end{cases}\\iff\\begin{cases}2\\sin\\dfrac{x+y}{2}\\cos\\dfrac{x-y}{2}=-\\sin z\\\\2\\cos\\dfrac{x+y}{2}\\cos\\dfrac{x-y}{2}=-\\cos z\\end{cases}\\Rightarrow \\tan\\frac{x+y}{2}=\\tan z$$\nHence, because of $x,y,z \\in [-\\pi,\\pi]$, $$x+y=2z\\iff x+y+z=3z\\Rightarrow \\color{red}{z=0 \\text{ and }x+y=0}$$\nNOTE.-Obviously instead of $z$ we can choose either $x$ and $y$ to be zero. 
Thus $$\\color{red}{x^2+y^2+z^2=2t^2}$$ where $(x,y,z)=(t,-t,0)$ and with $t\\in [-\\pi,\\pi]$.</p>\n" }, { "answer_id": 2083351, "score": 1, "text": "<p>You can try this method:\n$$x+y+z=0 \\implies x+y=-z\\tag1$$\nAnd $$\\sin{x}+\\sin{y}+\\sin{z}=0$$\n$$\\implies \\sin{x}+\\sin{y}=-\\sin{z}\\tag2$$\nAnd $$\\cos{x}+\\cos{y}+\\cos{z}=0$$\n$$\\cos{x}+\\cos{y}=-\\cos{z}\\tag3$$</p>\n\n<p>So $$(2)^2+(3)^2 \\implies 2+2\\cos(x-y)=1$$\n$$\\implies \\cos(x-y)=-\\frac12\\tag4$$</p>\n\n<p>Similarly we get $$\\cos(y-z)=-\\frac12\\tag5$$\n$$\\cos(z-x)=-\\frac12\\tag6$$</p>\n\n<p>And from $(1)$, we get $\\cos(x+y)=\\cos z$</p>\n\n<p>Can you proceed now?</p>\n" }, { "answer_id": 2083373, "score": 1, "text": "<p>\\begin{eqnarray}\n\\sin z &amp;=&amp; -\\sin x-\\sin y\\\\\n\\cos z &amp;=&amp; -\\cos x-\\cos y\n\\end{eqnarray}\nso\n\\begin{eqnarray}\n\\sin^2z &amp;=&amp; \\sin^2x+\\sin^2y+2\\sin x\\sin y\\\\\n\\cos^2z &amp;=&amp; \\cos^2x+\\cos^2y+2\\cos x\\cos y\n\\end{eqnarray}\nadding to these equations concludes\n\\begin{eqnarray}\n1 &amp;=&amp; 1+1+2(\\sin x\\sin y+\\cos x\\cos y)\\\\\n0 &amp;=&amp; 1+2\\cos(x-y)\\\\\n-\\frac{1}{2} &amp;=&amp; \\cos(x-y)\n\\end{eqnarray}\nand also\n$$\\cos^2\\frac{x-y}{2}=\\frac{1+\\cos(x-y)}{2}=\\frac{1}{4}$$\nor\n$$\\cos\\frac{x-y}{2}=\\pm\\frac{1}{2}$$\nIn the other hand\n\\begin{eqnarray}\n(\\sin x+\\sin y+\\sin z)^2 + (\\cos x+\\cos y+\\cos z)^2 &amp;=&amp; 0\\\\\n\\sin^2x+\\sin^2y+\\sin^2z+2\\sin x\\sin y+2\\sin y\\sin z+2\\sin z\\sin x &amp;+&amp;\\\\\n\\cos^2x+\\cos^2y+\\cos^2z+2\\cos x\\cos y+2\\cos y\\cos z+2\\cos z\\cos x &amp;=&amp;0\\\\\n3+2\\big(\\cos x\\cos y+\\sin x\\sin y\\big)+2\\big(\\cos y\\cos z+\\sin y\\sin z\\big)+2\\big(\\cos z\\cos x+\\sin z\\sin x\\big) &amp;=&amp;0\\\\\n3+2\\cos(x-y)+2\\cos(y-z)+2\\cos(z-x) &amp;=&amp;0\\\\\n3+2(-\\frac{1}{2})+4\\cos\\frac{x-y}{2}\\cos3\\frac{x+y}{2} &amp;=&amp;0\\\\\n\\cos\\frac{x-y}{2}\\cos3\\frac{x+y}{2} &amp;=&amp;-\\frac{1}{2}\\\\\n\\cos3\\frac{x+y}{2} 
&amp;=&amp;\\mp1\\\\\n\\end{eqnarray}\nso\n\\begin{eqnarray}\n\\cos(x-y) &amp;=&amp; -\\frac{1}{2}\\\\\n\\cos3\\frac{x+y}{2} &amp;=&amp;\\mp1\n\\end{eqnarray}\nWe have two answer for each of them\n\\begin{eqnarray}\nx-y &amp;=&amp; \\frac{2\\pi}{3},~\\frac{-\\pi}{3}\\\\\nx+y &amp;=&amp; 0,~\\frac{2\\pi}{3}\n\\end{eqnarray}\nWe obtain 4 answer from these equations and then permutation over $(x,y,z)$ from equations symmetrically, gives us 12 answer. Finally by substitution answers in equations, $x^2+y^2+z^2$ has one value ($\\displaystyle\\frac{2\\pi}{3},\\frac{-2\\pi}{3},0$), it is\n$$\\frac{8\\pi^2}{9}$$</p>\n" }, { "answer_id": 2083378, "score": 0, "text": "<p>Although this is same to answer of @schrodingersCat Let $x\\geq y\\geq0\\geq z$ \n$$\\cos(x+y)^2=1-\\sin(x+y)^2$$ \n$$⇔1-(\\sin{x}+\\sin{y})^2=(\\cos{x}+\\cos{y})^2$$\n$$⇔1+2(\\cos{x}\\cos{y}+\\sin{x}\\sin{y})=0$$\n$$⇔\\cos(x-y)=-1/2 $$\n$$x-y=2\\pi/3$$\nand\n$$\\sin{x}+\\sin{y}=-\\sin{z}$$\n$$⇔\\sin(y+2\\pi/3)+\\sin{y}=\\sqrt3/2$$ \n$$⇔(1/2)\\sin{y}+\\sqrt3/2\\cos{y}=\\sqrt3/2$$\n$$⇔\\sin(y+\\pi/3)=\\sqrt3/2$$ </p>\n\n<p>then we get $$x=2\\pi/3 ,y=0,z=-2\\pi/3$$, if angle(x,y,z) are points of unit circle, they consist of the regular triangle. The answer of $x^2+y^2+z^2$ is unique and,\n$$\\displaystyle x^2+y^2+z^2=\\frac{8\\pi^2}9 $$</p>\n" }, { "answer_id": 2083550, "score": 0, "text": "<p>HINT.- $$\\begin{cases}\\sin{x}+\\sin{y}+\\sin{z}=0\\\\\\cos{x}+\\cos{y}+\\cos{z}=0\\end{cases}\\iff\\begin{cases}2\\sin\\dfrac{x+y}{2}\\cos\\dfrac{x-y}{2}=-\\sin z\\\\2\\cos\\dfrac{x+y}{2}\\cos\\dfrac{x-y}{2}=-\\cos z\\end{cases}\\Rightarrow \\tan\\frac{x+y}{2}=\\tan z$$\nHence, because of $x,y,z \\in [-\\pi,\\pi]$, $$x+y=2z\\iff x+y+z=3z\\Rightarrow \\color{red}{z=0 \\text{ and }x+y=0}$$\nNOTE.-Obviously instead of $z$ we can choose either $x$ and $y$ to be zero. 
Thus $$\\color{red}{x^2+y^2+z^2=2t^2}$$ where $(x,y,z)=(t,-t,0)$ and with $t\\in [-\\pi,\\pi]$.</p>\n" }, { "answer_id": 2083351, "score": 1, "text": "<p>You can try this method:\n$$x+y+z=0 \\implies x+y=-z\\tag1$$\nAnd $$\\sin{x}+\\sin{y}+\\sin{z}=0$$\n$$\\implies \\sin{x}+\\sin{y}=-\\sin{z}\\tag2$$\nAnd $$\\cos{x}+\\cos{y}+\\cos{z}=0$$\n$$\\cos{x}+\\cos{y}=-\\cos{z}\\tag3$$</p>\n\n<p>So $$(2)^2+(3)^2 \\implies 2+2\\cos(x-y)=1$$\n$$\\implies \\cos(x-y)=-\\frac12\\tag4$$</p>\n\n<p>Similarly we get $$\\cos(y-z)=-\\frac12\\tag5$$\n$$\\cos(z-x)=-\\frac12\\tag6$$</p>\n\n<p>And from $(1)$, we get $\\cos(x+y)=\\cos z$</p>\n\n<p>Can you proceed now?</p>\n" }, { "answer_id": 2083373, "score": 1, "text": "<p>\\begin{eqnarray}\n\\sin z &amp;=&amp; -\\sin x-\\sin y\\\\\n\\cos z &amp;=&amp; -\\cos x-\\cos y\n\\end{eqnarray}\nso\n\\begin{eqnarray}\n\\sin^2z &amp;=&amp; \\sin^2x+\\sin^2y+2\\sin x\\sin y\\\\\n\\cos^2z &amp;=&amp; \\cos^2x+\\cos^2y+2\\cos x\\cos y\n\\end{eqnarray}\nadding to these equations concludes\n\\begin{eqnarray}\n1 &amp;=&amp; 1+1+2(\\sin x\\sin y+\\cos x\\cos y)\\\\\n0 &amp;=&amp; 1+2\\cos(x-y)\\\\\n-\\frac{1}{2} &amp;=&amp; \\cos(x-y)\n\\end{eqnarray}\nand also\n$$\\cos^2\\frac{x-y}{2}=\\frac{1+\\cos(x-y)}{2}=\\frac{1}{4}$$\nor\n$$\\cos\\frac{x-y}{2}=\\pm\\frac{1}{2}$$\nIn the other hand\n\\begin{eqnarray}\n(\\sin x+\\sin y+\\sin z)^2 + (\\cos x+\\cos y+\\cos z)^2 &amp;=&amp; 0\\\\\n\\sin^2x+\\sin^2y+\\sin^2z+2\\sin x\\sin y+2\\sin y\\sin z+2\\sin z\\sin x &amp;+&amp;\\\\\n\\cos^2x+\\cos^2y+\\cos^2z+2\\cos x\\cos y+2\\cos y\\cos z+2\\cos z\\cos x &amp;=&amp;0\\\\\n3+2\\big(\\cos x\\cos y+\\sin x\\sin y\\big)+2\\big(\\cos y\\cos z+\\sin y\\sin z\\big)+2\\big(\\cos z\\cos x+\\sin z\\sin x\\big) &amp;=&amp;0\\\\\n3+2\\cos(x-y)+2\\cos(y-z)+2\\cos(z-x) &amp;=&amp;0\\\\\n3+2(-\\frac{1}{2})+4\\cos\\frac{x-y}{2}\\cos3\\frac{x+y}{2} &amp;=&amp;0\\\\\n\\cos\\frac{x-y}{2}\\cos3\\frac{x+y}{2} &amp;=&amp;-\\frac{1}{2}\\\\\n\\cos3\\frac{x+y}{2} 
&amp;=&amp;\\mp1\\\\\n\\end{eqnarray}\nso\n\\begin{eqnarray}\n\\cos(x-y) &amp;=&amp; -\\frac{1}{2}\\\\\n\\cos3\\frac{x+y}{2} &amp;=&amp;\\mp1\n\\end{eqnarray}\nWe have two answer for each of them\n\\begin{eqnarray}\nx-y &amp;=&amp; \\frac{2\\pi}{3},~\\frac{-\\pi}{3}\\\\\nx+y &amp;=&amp; 0,~\\frac{2\\pi}{3}\n\\end{eqnarray}\nWe obtain 4 answer from these equations and then permutation over $(x,y,z)$ from equations symmetrically, gives us 12 answer. Finally by substitution answers in equations, $x^2+y^2+z^2$ has one value ($\\displaystyle\\frac{2\\pi}{3},\\frac{-2\\pi}{3},0$), it is\n$$\\frac{8\\pi^2}{9}$$</p>\n" }, { "answer_id": 2083378, "score": 0, "text": "<p>Although this is same to answer of @schrodingersCat Let $x\\geq y\\geq0\\geq z$ \n$$\\cos(x+y)^2=1-\\sin(x+y)^2$$ \n$$⇔1-(\\sin{x}+\\sin{y})^2=(\\cos{x}+\\cos{y})^2$$\n$$⇔1+2(\\cos{x}\\cos{y}+\\sin{x}\\sin{y})=0$$\n$$⇔\\cos(x-y)=-1/2 $$\n$$x-y=2\\pi/3$$\nand\n$$\\sin{x}+\\sin{y}=-\\sin{z}$$\n$$⇔\\sin(y+2\\pi/3)+\\sin{y}=\\sqrt3/2$$ \n$$⇔(1/2)\\sin{y}+\\sqrt3/2\\cos{y}=\\sqrt3/2$$\n$$⇔\\sin(y+\\pi/3)=\\sqrt3/2$$ </p>\n\n<p>then we get $$x=2\\pi/3 ,y=0,z=-2\\pi/3$$, if angle(x,y,z) are points of unit circle, they consist of the regular triangle. The answer of $x^2+y^2+z^2$ is unique and,\n$$\\displaystyle x^2+y^2+z^2=\\frac{8\\pi^2}9 $$</p>\n" }, { "answer_id": 2083550, "score": 0, "text": "<p>HINT.- $$\\begin{cases}\\sin{x}+\\sin{y}+\\sin{z}=0\\\\\\cos{x}+\\cos{y}+\\cos{z}=0\\end{cases}\\iff\\begin{cases}2\\sin\\dfrac{x+y}{2}\\cos\\dfrac{x-y}{2}=-\\sin z\\\\2\\cos\\dfrac{x+y}{2}\\cos\\dfrac{x-y}{2}=-\\cos z\\end{cases}\\Rightarrow \\tan\\frac{x+y}{2}=\\tan z$$\nHence, because of $x,y,z \\in [-\\pi,\\pi]$, $$x+y=2z\\iff x+y+z=3z\\Rightarrow \\color{red}{z=0 \\text{ and }x+y=0}$$\nNOTE.-Obviously instead of $z$ we can choose either $x$ and $y$ to be zero. 
Thus $$\\color{red}{x^2+y^2+z^2=2t^2}$$ where $(x,y,z)=(t,-t,0)$ and with $t\\in [-\\pi,\\pi]$.</p>\n" }, { "answer_id": 2083351, "score": 1, "text": "<p>You can try this method:\n$$x+y+z=0 \\implies x+y=-z\\tag1$$\nAnd $$\\sin{x}+\\sin{y}+\\sin{z}=0$$\n$$\\implies \\sin{x}+\\sin{y}=-\\sin{z}\\tag2$$\nAnd $$\\cos{x}+\\cos{y}+\\cos{z}=0$$\n$$\\cos{x}+\\cos{y}=-\\cos{z}\\tag3$$</p>\n\n<p>So $$(2)^2+(3)^2 \\implies 2+2\\cos(x-y)=1$$\n$$\\implies \\cos(x-y)=-\\frac12\\tag4$$</p>\n\n<p>Similarly we get $$\\cos(y-z)=-\\frac12\\tag5$$\n$$\\cos(z-x)=-\\frac12\\tag6$$</p>\n\n<p>And from $(1)$, we get $\\cos(x+y)=\\cos z$</p>\n\n<p>Can you proceed now?</p>\n" }, { "answer_id": 2083373, "score": 1, "text": "<p>\\begin{eqnarray}\n\\sin z &amp;=&amp; -\\sin x-\\sin y\\\\\n\\cos z &amp;=&amp; -\\cos x-\\cos y\n\\end{eqnarray}\nso\n\\begin{eqnarray}\n\\sin^2z &amp;=&amp; \\sin^2x+\\sin^2y+2\\sin x\\sin y\\\\\n\\cos^2z &amp;=&amp; \\cos^2x+\\cos^2y+2\\cos x\\cos y\n\\end{eqnarray}\nadding to these equations concludes\n\\begin{eqnarray}\n1 &amp;=&amp; 1+1+2(\\sin x\\sin y+\\cos x\\cos y)\\\\\n0 &amp;=&amp; 1+2\\cos(x-y)\\\\\n-\\frac{1}{2} &amp;=&amp; \\cos(x-y)\n\\end{eqnarray}\nand also\n$$\\cos^2\\frac{x-y}{2}=\\frac{1+\\cos(x-y)}{2}=\\frac{1}{4}$$\nor\n$$\\cos\\frac{x-y}{2}=\\pm\\frac{1}{2}$$\nIn the other hand\n\\begin{eqnarray}\n(\\sin x+\\sin y+\\sin z)^2 + (\\cos x+\\cos y+\\cos z)^2 &amp;=&amp; 0\\\\\n\\sin^2x+\\sin^2y+\\sin^2z+2\\sin x\\sin y+2\\sin y\\sin z+2\\sin z\\sin x &amp;+&amp;\\\\\n\\cos^2x+\\cos^2y+\\cos^2z+2\\cos x\\cos y+2\\cos y\\cos z+2\\cos z\\cos x &amp;=&amp;0\\\\\n3+2\\big(\\cos x\\cos y+\\sin x\\sin y\\big)+2\\big(\\cos y\\cos z+\\sin y\\sin z\\big)+2\\big(\\cos z\\cos x+\\sin z\\sin x\\big) &amp;=&amp;0\\\\\n3+2\\cos(x-y)+2\\cos(y-z)+2\\cos(z-x) &amp;=&amp;0\\\\\n3+2(-\\frac{1}{2})+4\\cos\\frac{x-y}{2}\\cos3\\frac{x+y}{2} &amp;=&amp;0\\\\\n\\cos\\frac{x-y}{2}\\cos3\\frac{x+y}{2} &amp;=&amp;-\\frac{1}{2}\\\\\n\\cos3\\frac{x+y}{2} 
&amp;=&amp;\\mp1\\\\\n\\end{eqnarray}\nso\n\\begin{eqnarray}\n\\cos(x-y) &amp;=&amp; -\\frac{1}{2}\\\\\n\\cos3\\frac{x+y}{2} &amp;=&amp;\\mp1\n\\end{eqnarray}\nWe have two answer for each of them\n\\begin{eqnarray}\nx-y &amp;=&amp; \\frac{2\\pi}{3},~\\frac{-\\pi}{3}\\\\\nx+y &amp;=&amp; 0,~\\frac{2\\pi}{3}\n\\end{eqnarray}\nWe obtain 4 answer from these equations and then permutation over $(x,y,z)$ from equations symmetrically, gives us 12 answer. Finally by substitution answers in equations, $x^2+y^2+z^2$ has one value ($\\displaystyle\\frac{2\\pi}{3},\\frac{-2\\pi}{3},0$), it is\n$$\\frac{8\\pi^2}{9}$$</p>\n" }, { "answer_id": 2083378, "score": 0, "text": "<p>Although this is same to answer of @schrodingersCat Let $x\\geq y\\geq0\\geq z$ \n$$\\cos(x+y)^2=1-\\sin(x+y)^2$$ \n$$⇔1-(\\sin{x}+\\sin{y})^2=(\\cos{x}+\\cos{y})^2$$\n$$⇔1+2(\\cos{x}\\cos{y}+\\sin{x}\\sin{y})=0$$\n$$⇔\\cos(x-y)=-1/2 $$\n$$x-y=2\\pi/3$$\nand\n$$\\sin{x}+\\sin{y}=-\\sin{z}$$\n$$⇔\\sin(y+2\\pi/3)+\\sin{y}=\\sqrt3/2$$ \n$$⇔(1/2)\\sin{y}+\\sqrt3/2\\cos{y}=\\sqrt3/2$$\n$$⇔\\sin(y+\\pi/3)=\\sqrt3/2$$ </p>\n\n<p>then we get $$x=2\\pi/3 ,y=0,z=-2\\pi/3$$, if angle(x,y,z) are points of unit circle, they consist of the regular triangle. The answer of $x^2+y^2+z^2$ is unique and,\n$$\\displaystyle x^2+y^2+z^2=\\frac{8\\pi^2}9 $$</p>\n" }, { "answer_id": 2083550, "score": 0, "text": "<p>HINT.- $$\\begin{cases}\\sin{x}+\\sin{y}+\\sin{z}=0\\\\\\cos{x}+\\cos{y}+\\cos{z}=0\\end{cases}\\iff\\begin{cases}2\\sin\\dfrac{x+y}{2}\\cos\\dfrac{x-y}{2}=-\\sin z\\\\2\\cos\\dfrac{x+y}{2}\\cos\\dfrac{x-y}{2}=-\\cos z\\end{cases}\\Rightarrow \\tan\\frac{x+y}{2}=\\tan z$$\nHence, because of $x,y,z \\in [-\\pi,\\pi]$, $$x+y=2z\\iff x+y+z=3z\\Rightarrow \\color{red}{z=0 \\text{ and }x+y=0}$$\nNOTE.-Obviously instead of $z$ we can choose either $x$ and $y$ to be zero. 
Thus $$\\color{red}{x^2+y^2+z^2=2t^2}$$ where $(x,y,z)=(t,-t,0)$ and with $t\\in [-\\pi,\\pi]$.</p>\n" } ]
Question: Find $x^2+y^2+z^2$ if $x+y+z=0$ and $\sin{x}+\sin{y}+\sin{z}=0$ and $\cos{x}+\cos{y}+\cos{z}=0$ for $x,y,z \in [-\pi,\pi]$ Answer 1: <p>You can try this method: $$x+y+z=0 \implies x+y=-z\tag1$$ And $$\sin{x}+\sin{y}+\sin{z}=0$$ $$\implies \sin{x}+\sin{y}=-\sin{z}\tag2$$ And $$\cos{x}+\cos{y}+\cos{z}=0$$ $$\cos{x}+\cos{y}=-\cos{z}\tag3$$</p> <p>So $$(2)^2+(3)^2 \implies 2+2\cos(x-y)=1$$ $$\implies \cos(x-y)=-\frac12\tag4$$</p> <p>Similarly we get $$\cos(y-z)=-\frac12\tag5$$ $$\cos(z-x)=-\frac12\tag6$$</p> <p>And from $(1)$, we get $\cos(x+y)=\cos z$</p> <p>Can you proceed now?</p> Answer 2: <p>\begin{eqnarray} \sin z &amp;=&amp; -\sin x-\sin y\\ \cos z &amp;=&amp; -\cos x-\cos y \end{eqnarray} so \begin{eqnarray} \sin^2z &amp;=&amp; \sin^2x+\sin^2y+2\sin x\sin y\\ \cos^2z &amp;=&amp; \cos^2x+\cos^2y+2\cos x\cos y \end{eqnarray} adding these two equations gives \begin{eqnarray} 1 &amp;=&amp; 1+1+2(\sin x\sin y+\cos x\cos y)\\ 0 &amp;=&amp; 1+2\cos(x-y)\\ -\frac{1}{2} &amp;=&amp; \cos(x-y) \end{eqnarray} and also $$\cos^2\frac{x-y}{2}=\frac{1+\cos(x-y)}{2}=\frac{1}{4}$$ or $$\cos\frac{x-y}{2}=\pm\frac{1}{2}$$ On the other hand \begin{eqnarray} (\sin x+\sin y+\sin z)^2 + (\cos x+\cos y+\cos z)^2 &amp;=&amp; 0\\ \sin^2x+\sin^2y+\sin^2z+2\sin x\sin y+2\sin y\sin z+2\sin z\sin x &amp;+&amp;\\ \cos^2x+\cos^2y+\cos^2z+2\cos x\cos y+2\cos y\cos z+2\cos z\cos x &amp;=&amp;0\\ 3+2\big(\cos x\cos y+\sin x\sin y\big)+2\big(\cos y\cos z+\sin y\sin z\big)+2\big(\cos z\cos x+\sin z\sin x\big) &amp;=&amp;0\\ 3+2\cos(x-y)+2\cos(y-z)+2\cos(z-x) &amp;=&amp;0\\ 3+2(-\frac{1}{2})+4\cos\frac{x-y}{2}\cos3\frac{x+y}{2} &amp;=&amp;0\\ \cos\frac{x-y}{2}\cos3\frac{x+y}{2} &amp;=&amp;-\frac{1}{2}\\ \cos3\frac{x+y}{2} &amp;=&amp;\mp1\\ \end{eqnarray} so \begin{eqnarray} \cos(x-y) &amp;=&amp; -\frac{1}{2}\\ \cos3\frac{x+y}{2} &amp;=&amp;\mp1 \end{eqnarray} We have two answers for each of them \begin{eqnarray} x-y &amp;=&amp; \frac{2\pi}{3},~\frac{-2\pi}{3}\\ x+y &amp;=&amp; 
0,~\frac{2\pi}{3} \end{eqnarray} We obtain 4 answers from these equations, and permuting $(x,y,z)$ symmetrically in the equations then gives us 12 answers. Finally, by substituting the answers back into the equations, $x^2+y^2+z^2$ has only one value (attained at $\displaystyle\frac{2\pi}{3},\frac{-2\pi}{3},0$ and its permutations); it is $$\frac{8\pi^2}{9}$$</p> Answer 3: <p>Although this is similar to the answer of @schrodingersCat, let $x\geq y\geq0\geq z$ $$\cos(x+y)^2=1-\sin(x+y)^2$$ $$⇔1-(\sin{x}+\sin{y})^2=(\cos{x}+\cos{y})^2$$ $$⇔1+2(\cos{x}\cos{y}+\sin{x}\sin{y})=0$$ $$⇔\cos(x-y)=-1/2 $$ $$x-y=2\pi/3$$ and $$\sin{x}+\sin{y}=-\sin{z}$$ $$⇔\sin(y+2\pi/3)+\sin{y}=\sqrt3/2$$ $$⇔(1/2)\sin{y}+\sqrt3/2\cos{y}=\sqrt3/2$$ $$⇔\sin(y+\pi/3)=\sqrt3/2$$ </p> <p>then we get $$x=2\pi/3 ,y=0,z=-2\pi/3$$; if the angles $(x,y,z)$ are taken as points on the unit circle, they form an equilateral triangle. The value of $x^2+y^2+z^2$ is unique: $$\displaystyle x^2+y^2+z^2=\frac{8\pi^2}9 $$</p> Answer 4: <p>HINT.- $$\begin{cases}\sin{x}+\sin{y}+\sin{z}=0\\\cos{x}+\cos{y}+\cos{z}=0\end{cases}\iff\begin{cases}2\sin\dfrac{x+y}{2}\cos\dfrac{x-y}{2}=-\sin z\\2\cos\dfrac{x+y}{2}\cos\dfrac{x-y}{2}=-\cos z\end{cases}\Rightarrow \tan\frac{x+y}{2}=\tan z$$ Hence, because $x,y,z \in [-\pi,\pi]$, $$x+y=2z\iff x+y+z=3z\Rightarrow \color{red}{z=0 \text{ and }x+y=0}$$ NOTE.-Obviously, instead of $z$ we can choose either $x$ or $y$ to be zero. 
Thus $$\color{red}{x^2+y^2+z^2=2t^2}$$ where $(x,y,z)=(t,-t,0)$ and with $t\in [-\pi,\pi]$.</p> Answer 5: <p>You can try this method: $$x+y+z=0 \implies x+y=-z\tag1$$ And $$\sin{x}+\sin{y}+\sin{z}=0$$ $$\implies \sin{x}+\sin{y}=-\sin{z}\tag2$$ And $$\cos{x}+\cos{y}+\cos{z}=0$$ $$\cos{x}+\cos{y}=-\cos{z}\tag3$$</p> <p>So $$(2)^2+(3)^2 \implies 2+2\cos(x-y)=1$$ $$\implies \cos(x-y)=-\frac12\tag4$$</p> <p>Similarly we get $$\cos(y-z)=-\frac12\tag5$$ $$\cos(z-x)=-\frac12\tag6$$</p> <p>And from $(1)$, we get $\cos(x+y)=\cos z$</p> <p>Can you proceed now?</p> Answer 6: <p>\begin{eqnarray} \sin z &amp;=&amp; -\sin x-\sin y\\ \cos z &amp;=&amp; -\cos x-\cos y \end{eqnarray} so \begin{eqnarray} \sin^2z &amp;=&amp; \sin^2x+\sin^2y+2\sin x\sin y\\ \cos^2z &amp;=&amp; \cos^2x+\cos^2y+2\cos x\cos y \end{eqnarray} adding to these equations concludes \begin{eqnarray} 1 &amp;=&amp; 1+1+2(\sin x\sin y+\cos x\cos y)\\ 0 &amp;=&amp; 1+2\cos(x-y)\\ -\frac{1}{2} &amp;=&amp; \cos(x-y) \end{eqnarray} and also $$\cos^2\frac{x-y}{2}=\frac{1+\cos(x-y)}{2}=\frac{1}{4}$$ or $$\cos\frac{x-y}{2}=\pm\frac{1}{2}$$ In the other hand \begin{eqnarray} (\sin x+\sin y+\sin z)^2 + (\cos x+\cos y+\cos z)^2 &amp;=&amp; 0\\ \sin^2x+\sin^2y+\sin^2z+2\sin x\sin y+2\sin y\sin z+2\sin z\sin x &amp;+&amp;\\ \cos^2x+\cos^2y+\cos^2z+2\cos x\cos y+2\cos y\cos z+2\cos z\cos x &amp;=&amp;0\\ 3+2\big(\cos x\cos y+\sin x\sin y\big)+2\big(\cos y\cos z+\sin y\sin z\big)+2\big(\cos z\cos x+\sin z\sin x\big) &amp;=&amp;0\\ 3+2\cos(x-y)+2\cos(y-z)+2\cos(z-x) &amp;=&amp;0\\ 3+2(-\frac{1}{2})+4\cos\frac{x-y}{2}\cos3\frac{x+y}{2} &amp;=&amp;0\\ \cos\frac{x-y}{2}\cos3\frac{x+y}{2} &amp;=&amp;-\frac{1}{2}\\ \cos3\frac{x+y}{2} &amp;=&amp;\mp1\\ \end{eqnarray} so \begin{eqnarray} \cos(x-y) &amp;=&amp; -\frac{1}{2}\\ \cos3\frac{x+y}{2} &amp;=&amp;\mp1 \end{eqnarray} We have two answer for each of them \begin{eqnarray} x-y &amp;=&amp; \frac{2\pi}{3},~\frac{-\pi}{3}\\ x+y &amp;=&amp; 0,~\frac{2\pi}{3} \end{eqnarray} We obtain 4 
answer from these equations and then permutation over $(x,y,z)$ from equations symmetrically, gives us 12 answer. Finally by substitution answers in equations, $x^2+y^2+z^2$ has one value ($\displaystyle\frac{2\pi}{3},\frac{-2\pi}{3},0$), it is $$\frac{8\pi^2}{9}$$</p> Answer 7: <p>Although this is same to answer of @schrodingersCat Let $x\geq y\geq0\geq z$ $$\cos(x+y)^2=1-\sin(x+y)^2$$ $$⇔1-(\sin{x}+\sin{y})^2=(\cos{x}+\cos{y})^2$$ $$⇔1+2(\cos{x}\cos{y}+\sin{x}\sin{y})=0$$ $$⇔\cos(x-y)=-1/2 $$ $$x-y=2\pi/3$$ and $$\sin{x}+\sin{y}=-\sin{z}$$ $$⇔\sin(y+2\pi/3)+\sin{y}=\sqrt3/2$$ $$⇔(1/2)\sin{y}+\sqrt3/2\cos{y}=\sqrt3/2$$ $$⇔\sin(y+\pi/3)=\sqrt3/2$$ </p> <p>then we get $$x=2\pi/3 ,y=0,z=-2\pi/3$$, if angle(x,y,z) are points of unit circle, they consist of the regular triangle. The answer of $x^2+y^2+z^2$ is unique and, $$\displaystyle x^2+y^2+z^2=\frac{8\pi^2}9 $$</p> Answer 8: <p>HINT.- $$\begin{cases}\sin{x}+\sin{y}+\sin{z}=0\\\cos{x}+\cos{y}+\cos{z}=0\end{cases}\iff\begin{cases}2\sin\dfrac{x+y}{2}\cos\dfrac{x-y}{2}=-\sin z\\2\cos\dfrac{x+y}{2}\cos\dfrac{x-y}{2}=-\cos z\end{cases}\Rightarrow \tan\frac{x+y}{2}=\tan z$$ Hence, because of $x,y,z \in [-\pi,\pi]$, $$x+y=2z\iff x+y+z=3z\Rightarrow \color{red}{z=0 \text{ and }x+y=0}$$ NOTE.-Obviously instead of $z$ we can choose either $x$ and $y$ to be zero. 
Thus $$\color{red}{x^2+y^2+z^2=2t^2}$$ where $(x,y,z)=(t,-t,0)$ and with $t\in [-\pi,\pi]$.</p> Answer 9: <p>You can try this method: $$x+y+z=0 \implies x+y=-z\tag1$$ And $$\sin{x}+\sin{y}+\sin{z}=0$$ $$\implies \sin{x}+\sin{y}=-\sin{z}\tag2$$ And $$\cos{x}+\cos{y}+\cos{z}=0$$ $$\cos{x}+\cos{y}=-\cos{z}\tag3$$</p> <p>So $$(2)^2+(3)^2 \implies 2+2\cos(x-y)=1$$ $$\implies \cos(x-y)=-\frac12\tag4$$</p> <p>Similarly we get $$\cos(y-z)=-\frac12\tag5$$ $$\cos(z-x)=-\frac12\tag6$$</p> <p>And from $(1)$, we get $\cos(x+y)=\cos z$</p> <p>Can you proceed now?</p> Answer 10: <p>\begin{eqnarray} \sin z &amp;=&amp; -\sin x-\sin y\\ \cos z &amp;=&amp; -\cos x-\cos y \end{eqnarray} so \begin{eqnarray} \sin^2z &amp;=&amp; \sin^2x+\sin^2y+2\sin x\sin y\\ \cos^2z &amp;=&amp; \cos^2x+\cos^2y+2\cos x\cos y \end{eqnarray} adding to these equations concludes \begin{eqnarray} 1 &amp;=&amp; 1+1+2(\sin x\sin y+\cos x\cos y)\\ 0 &amp;=&amp; 1+2\cos(x-y)\\ -\frac{1}{2} &amp;=&amp; \cos(x-y) \end{eqnarray} and also $$\cos^2\frac{x-y}{2}=\frac{1+\cos(x-y)}{2}=\frac{1}{4}$$ or $$\cos\frac{x-y}{2}=\pm\frac{1}{2}$$ In the other hand \begin{eqnarray} (\sin x+\sin y+\sin z)^2 + (\cos x+\cos y+\cos z)^2 &amp;=&amp; 0\\ \sin^2x+\sin^2y+\sin^2z+2\sin x\sin y+2\sin y\sin z+2\sin z\sin x &amp;+&amp;\\ \cos^2x+\cos^2y+\cos^2z+2\cos x\cos y+2\cos y\cos z+2\cos z\cos x &amp;=&amp;0\\ 3+2\big(\cos x\cos y+\sin x\sin y\big)+2\big(\cos y\cos z+\sin y\sin z\big)+2\big(\cos z\cos x+\sin z\sin x\big) &amp;=&amp;0\\ 3+2\cos(x-y)+2\cos(y-z)+2\cos(z-x) &amp;=&amp;0\\ 3+2(-\frac{1}{2})+4\cos\frac{x-y}{2}\cos3\frac{x+y}{2} &amp;=&amp;0\\ \cos\frac{x-y}{2}\cos3\frac{x+y}{2} &amp;=&amp;-\frac{1}{2}\\ \cos3\frac{x+y}{2} &amp;=&amp;\mp1\\ \end{eqnarray} so \begin{eqnarray} \cos(x-y) &amp;=&amp; -\frac{1}{2}\\ \cos3\frac{x+y}{2} &amp;=&amp;\mp1 \end{eqnarray} We have two answer for each of them \begin{eqnarray} x-y &amp;=&amp; \frac{2\pi}{3},~\frac{-\pi}{3}\\ x+y &amp;=&amp; 0,~\frac{2\pi}{3} \end{eqnarray} We obtain 4 
answer from these equations and then permutation over $(x,y,z)$ from equations symmetrically, gives us 12 answer. Finally by substitution answers in equations, $x^2+y^2+z^2$ has one value ($\displaystyle\frac{2\pi}{3},\frac{-2\pi}{3},0$), it is $$\frac{8\pi^2}{9}$$</p> Answer 11: <p>Although this is same to answer of @schrodingersCat Let $x\geq y\geq0\geq z$ $$\cos(x+y)^2=1-\sin(x+y)^2$$ $$⇔1-(\sin{x}+\sin{y})^2=(\cos{x}+\cos{y})^2$$ $$⇔1+2(\cos{x}\cos{y}+\sin{x}\sin{y})=0$$ $$⇔\cos(x-y)=-1/2 $$ $$x-y=2\pi/3$$ and $$\sin{x}+\sin{y}=-\sin{z}$$ $$⇔\sin(y+2\pi/3)+\sin{y}=\sqrt3/2$$ $$⇔(1/2)\sin{y}+\sqrt3/2\cos{y}=\sqrt3/2$$ $$⇔\sin(y+\pi/3)=\sqrt3/2$$ </p> <p>then we get $$x=2\pi/3 ,y=0,z=-2\pi/3$$, if angle(x,y,z) are points of unit circle, they consist of the regular triangle. The answer of $x^2+y^2+z^2$ is unique and, $$\displaystyle x^2+y^2+z^2=\frac{8\pi^2}9 $$</p> Answer 12: <p>HINT.- $$\begin{cases}\sin{x}+\sin{y}+\sin{z}=0\\\cos{x}+\cos{y}+\cos{z}=0\end{cases}\iff\begin{cases}2\sin\dfrac{x+y}{2}\cos\dfrac{x-y}{2}=-\sin z\\2\cos\dfrac{x+y}{2}\cos\dfrac{x-y}{2}=-\cos z\end{cases}\Rightarrow \tan\frac{x+y}{2}=\tan z$$ Hence, because of $x,y,z \in [-\pi,\pi]$, $$x+y=2z\iff x+y+z=3z\Rightarrow \color{red}{z=0 \text{ and }x+y=0}$$ NOTE.-Obviously instead of $z$ we can choose either $x$ and $y$ to be zero. 
Thus $$\color{red}{x^2+y^2+z^2=2t^2}$$ where $(x,y,z)=(t,-t,0)$ and with $t\in [-\pi,\pi]$.</p> Answer 13: <p>You can try this method: $$x+y+z=0 \implies x+y=-z\tag1$$ And $$\sin{x}+\sin{y}+\sin{z}=0$$ $$\implies \sin{x}+\sin{y}=-\sin{z}\tag2$$ And $$\cos{x}+\cos{y}+\cos{z}=0$$ $$\cos{x}+\cos{y}=-\cos{z}\tag3$$</p> <p>So $$(2)^2+(3)^2 \implies 2+2\cos(x-y)=1$$ $$\implies \cos(x-y)=-\frac12\tag4$$</p> <p>Similarly we get $$\cos(y-z)=-\frac12\tag5$$ $$\cos(z-x)=-\frac12\tag6$$</p> <p>And from $(1)$, we get $\cos(x+y)=\cos z$</p> <p>Can you proceed now?</p> Answer 14: <p>\begin{eqnarray} \sin z &amp;=&amp; -\sin x-\sin y\\ \cos z &amp;=&amp; -\cos x-\cos y \end{eqnarray} so \begin{eqnarray} \sin^2z &amp;=&amp; \sin^2x+\sin^2y+2\sin x\sin y\\ \cos^2z &amp;=&amp; \cos^2x+\cos^2y+2\cos x\cos y \end{eqnarray} adding to these equations concludes \begin{eqnarray} 1 &amp;=&amp; 1+1+2(\sin x\sin y+\cos x\cos y)\\ 0 &amp;=&amp; 1+2\cos(x-y)\\ -\frac{1}{2} &amp;=&amp; \cos(x-y) \end{eqnarray} and also $$\cos^2\frac{x-y}{2}=\frac{1+\cos(x-y)}{2}=\frac{1}{4}$$ or $$\cos\frac{x-y}{2}=\pm\frac{1}{2}$$ In the other hand \begin{eqnarray} (\sin x+\sin y+\sin z)^2 + (\cos x+\cos y+\cos z)^2 &amp;=&amp; 0\\ \sin^2x+\sin^2y+\sin^2z+2\sin x\sin y+2\sin y\sin z+2\sin z\sin x &amp;+&amp;\\ \cos^2x+\cos^2y+\cos^2z+2\cos x\cos y+2\cos y\cos z+2\cos z\cos x &amp;=&amp;0\\ 3+2\big(\cos x\cos y+\sin x\sin y\big)+2\big(\cos y\cos z+\sin y\sin z\big)+2\big(\cos z\cos x+\sin z\sin x\big) &amp;=&amp;0\\ 3+2\cos(x-y)+2\cos(y-z)+2\cos(z-x) &amp;=&amp;0\\ 3+2(-\frac{1}{2})+4\cos\frac{x-y}{2}\cos3\frac{x+y}{2} &amp;=&amp;0\\ \cos\frac{x-y}{2}\cos3\frac{x+y}{2} &amp;=&amp;-\frac{1}{2}\\ \cos3\frac{x+y}{2} &amp;=&amp;\mp1\\ \end{eqnarray} so \begin{eqnarray} \cos(x-y) &amp;=&amp; -\frac{1}{2}\\ \cos3\frac{x+y}{2} &amp;=&amp;\mp1 \end{eqnarray} We have two answer for each of them \begin{eqnarray} x-y &amp;=&amp; \frac{2\pi}{3},~\frac{-\pi}{3}\\ x+y &amp;=&amp; 0,~\frac{2\pi}{3} \end{eqnarray} We obtain 4 
answer from these equations and then permutation over $(x,y,z)$ from equations symmetrically, gives us 12 answer. Finally by substitution answers in equations, $x^2+y^2+z^2$ has one value ($\displaystyle\frac{2\pi}{3},\frac{-2\pi}{3},0$), it is $$\frac{8\pi^2}{9}$$</p> Answer 15: <p>Although this is same to answer of @schrodingersCat Let $x\geq y\geq0\geq z$ $$\cos(x+y)^2=1-\sin(x+y)^2$$ $$⇔1-(\sin{x}+\sin{y})^2=(\cos{x}+\cos{y})^2$$ $$⇔1+2(\cos{x}\cos{y}+\sin{x}\sin{y})=0$$ $$⇔\cos(x-y)=-1/2 $$ $$x-y=2\pi/3$$ and $$\sin{x}+\sin{y}=-\sin{z}$$ $$⇔\sin(y+2\pi/3)+\sin{y}=\sqrt3/2$$ $$⇔(1/2)\sin{y}+\sqrt3/2\cos{y}=\sqrt3/2$$ $$⇔\sin(y+\pi/3)=\sqrt3/2$$ </p> <p>then we get $$x=2\pi/3 ,y=0,z=-2\pi/3$$, if angle(x,y,z) are points of unit circle, they consist of the regular triangle. The answer of $x^2+y^2+z^2$ is unique and, $$\displaystyle x^2+y^2+z^2=\frac{8\pi^2}9 $$</p> Answer 16: <p>HINT.- $$\begin{cases}\sin{x}+\sin{y}+\sin{z}=0\\\cos{x}+\cos{y}+\cos{z}=0\end{cases}\iff\begin{cases}2\sin\dfrac{x+y}{2}\cos\dfrac{x-y}{2}=-\sin z\\2\cos\dfrac{x+y}{2}\cos\dfrac{x-y}{2}=-\cos z\end{cases}\Rightarrow \tan\frac{x+y}{2}=\tan z$$ Hence, because of $x,y,z \in [-\pi,\pi]$, $$x+y=2z\iff x+y+z=3z\Rightarrow \color{red}{z=0 \text{ and }x+y=0}$$ NOTE.-Obviously instead of $z$ we can choose either $x$ and $y$ to be zero. 
Thus $$\color{red}{x^2+y^2+z^2=2t^2}$$ where $(x,y,z)=(t,-t,0)$ and with $t\in [-\pi,\pi]$.</p> Answer 17: <p>You can try this method: $$x+y+z=0 \implies x+y=-z\tag1$$ And $$\sin{x}+\sin{y}+\sin{z}=0$$ $$\implies \sin{x}+\sin{y}=-\sin{z}\tag2$$ And $$\cos{x}+\cos{y}+\cos{z}=0$$ $$\cos{x}+\cos{y}=-\cos{z}\tag3$$</p> <p>So $$(2)^2+(3)^2 \implies 2+2\cos(x-y)=1$$ $$\implies \cos(x-y)=-\frac12\tag4$$</p> <p>Similarly we get $$\cos(y-z)=-\frac12\tag5$$ $$\cos(z-x)=-\frac12\tag6$$</p> <p>And from $(1)$, we get $\cos(x+y)=\cos z$</p> <p>Can you proceed now?</p> Answer 18: <p>\begin{eqnarray} \sin z &amp;=&amp; -\sin x-\sin y\\ \cos z &amp;=&amp; -\cos x-\cos y \end{eqnarray} so \begin{eqnarray} \sin^2z &amp;=&amp; \sin^2x+\sin^2y+2\sin x\sin y\\ \cos^2z &amp;=&amp; \cos^2x+\cos^2y+2\cos x\cos y \end{eqnarray} adding to these equations concludes \begin{eqnarray} 1 &amp;=&amp; 1+1+2(\sin x\sin y+\cos x\cos y)\\ 0 &amp;=&amp; 1+2\cos(x-y)\\ -\frac{1}{2} &amp;=&amp; \cos(x-y) \end{eqnarray} and also $$\cos^2\frac{x-y}{2}=\frac{1+\cos(x-y)}{2}=\frac{1}{4}$$ or $$\cos\frac{x-y}{2}=\pm\frac{1}{2}$$ In the other hand \begin{eqnarray} (\sin x+\sin y+\sin z)^2 + (\cos x+\cos y+\cos z)^2 &amp;=&amp; 0\\ \sin^2x+\sin^2y+\sin^2z+2\sin x\sin y+2\sin y\sin z+2\sin z\sin x &amp;+&amp;\\ \cos^2x+\cos^2y+\cos^2z+2\cos x\cos y+2\cos y\cos z+2\cos z\cos x &amp;=&amp;0\\ 3+2\big(\cos x\cos y+\sin x\sin y\big)+2\big(\cos y\cos z+\sin y\sin z\big)+2\big(\cos z\cos x+\sin z\sin x\big) &amp;=&amp;0\\ 3+2\cos(x-y)+2\cos(y-z)+2\cos(z-x) &amp;=&amp;0\\ 3+2(-\frac{1}{2})+4\cos\frac{x-y}{2}\cos3\frac{x+y}{2} &amp;=&amp;0\\ \cos\frac{x-y}{2}\cos3\frac{x+y}{2} &amp;=&amp;-\frac{1}{2}\\ \cos3\frac{x+y}{2} &amp;=&amp;\mp1\\ \end{eqnarray} so \begin{eqnarray} \cos(x-y) &amp;=&amp; -\frac{1}{2}\\ \cos3\frac{x+y}{2} &amp;=&amp;\mp1 \end{eqnarray} We have two answer for each of them \begin{eqnarray} x-y &amp;=&amp; \frac{2\pi}{3},~\frac{-\pi}{3}\\ x+y &amp;=&amp; 0,~\frac{2\pi}{3} \end{eqnarray} We obtain 4 
answer from these equations and then permutation over $(x,y,z)$ from equations symmetrically, gives us 12 answer. Finally by substitution answers in equations, $x^2+y^2+z^2$ has one value ($\displaystyle\frac{2\pi}{3},\frac{-2\pi}{3},0$), it is $$\frac{8\pi^2}{9}$$</p> Answer 19: <p>Although this is same to answer of @schrodingersCat Let $x\geq y\geq0\geq z$ $$\cos(x+y)^2=1-\sin(x+y)^2$$ $$⇔1-(\sin{x}+\sin{y})^2=(\cos{x}+\cos{y})^2$$ $$⇔1+2(\cos{x}\cos{y}+\sin{x}\sin{y})=0$$ $$⇔\cos(x-y)=-1/2 $$ $$x-y=2\pi/3$$ and $$\sin{x}+\sin{y}=-\sin{z}$$ $$⇔\sin(y+2\pi/3)+\sin{y}=\sqrt3/2$$ $$⇔(1/2)\sin{y}+\sqrt3/2\cos{y}=\sqrt3/2$$ $$⇔\sin(y+\pi/3)=\sqrt3/2$$ </p> <p>then we get $$x=2\pi/3 ,y=0,z=-2\pi/3$$, if angle(x,y,z) are points of unit circle, they consist of the regular triangle. The answer of $x^2+y^2+z^2$ is unique and, $$\displaystyle x^2+y^2+z^2=\frac{8\pi^2}9 $$</p> Answer 20: <p>HINT.- $$\begin{cases}\sin{x}+\sin{y}+\sin{z}=0\\\cos{x}+\cos{y}+\cos{z}=0\end{cases}\iff\begin{cases}2\sin\dfrac{x+y}{2}\cos\dfrac{x-y}{2}=-\sin z\\2\cos\dfrac{x+y}{2}\cos\dfrac{x-y}{2}=-\cos z\end{cases}\Rightarrow \tan\frac{x+y}{2}=\tan z$$ Hence, because of $x,y,z \in [-\pi,\pi]$, $$x+y=2z\iff x+y+z=3z\Rightarrow \color{red}{z=0 \text{ and }x+y=0}$$ NOTE.-Obviously instead of $z$ we can choose either $x$ and $y$ to be zero. 
Thus $$\color{red}{x^2+y^2+z^2=2t^2}$$ where $(x,y,z)=(t,-t,0)$ and with $t\in [-\pi,\pi]$.</p>
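The solutions derived in the answers above can be checked numerically. The sketch below is an editorial addition (not part of any quoted answer): it verifies that the candidate $(x,y,z)=(2\pi/3,\,0,\,-2\pi/3)$ satisfies all three constraints and yields $x^2+y^2+z^2=8\pi^2/9$, consistent with the value the answers arrive at.

```python
import math

# Candidate solution from the answers: (x, y, z) = (2*pi/3, 0, -2*pi/3)
x, y, z = 2 * math.pi / 3, 0.0, -2 * math.pi / 3

# Check the three constraints of the problem:
# x + y + z = 0, sin x + sin y + sin z = 0, cos x + cos y + cos z = 0
assert abs(x + y + z) < 1e-12
assert abs(math.sin(x) + math.sin(y) + math.sin(z)) < 1e-12
assert abs(math.cos(x) + math.cos(y) + math.cos(z)) < 1e-12

# The quantity asked about: x^2 + y^2 + z^2, compared with 8*pi^2/9
value = x**2 + y**2 + z**2
assert abs(value - 8 * math.pi**2 / 9) < 1e-9
print(value)
```

Note that this also matches the hint answer's family $(t,-t,0)$: the cosine constraint forces $\cos t=-1/2$, so $t=\pm 2\pi/3$ and $2t^2=8\pi^2/9$.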
How to prove this sum $\sum_{k=1}^{n-1}\frac{1}{k(n-k)}\binom{2(k-1)}{k-1}\binom{2(n-k-1)}{n-k-1}=\binom{2(n-1)}{n-1}$
2,083,545
5
https://math.stackexchange.com/questions/2083545/how-to-prove-this-sum-sum-k-1n-1-frac1kn-k-binom2k-1k-1-binom
<p>Let $n\ge 2$ postive integer, show that $$I=\sum_{k=1}^{n-1}\dfrac{1}{k(n-k)}\binom{2(k-1)}{k-1}\binom{2(n-k-1)}{n-k-1}=\binom{2(n-1)}{n-1}$$</p> <p>I have done this works $$I=\sum_{k=1}^{n-1}\dfrac{1}{k}\binom{2(k-1)}{k-1}\binom{2(n-k-1)}{n-k-1}+\sum_{k=1}^{n-1}\dfrac{1}{n-k}\binom{2(k-1)}{k-1}\binom{2(n-k-1)}{n-k-1}=\sum_{k=1}^{n-1}\dfrac{2}{k}\binom{2(k-1)}{k-1}\binom{2(n-k-1)}{n-k-1}$$</p>
[ { "answer_id": 2083661, "score": 3, "text": "<p>Note the <em><a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a></em> are defined as\n\\begin{align*}\nC_{n}&amp;=\\frac{1}{n+1}\\binom{2n}{n}\\qquad\\quad n\\geq 0\\\\\n\\end{align*}\nwith generating function\n\\begin{align*}\nC(z)=\\sum_{n=0}^\\infty C_nz^n=\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\n\\end{align*}</p>\n\n<blockquote>\n <p>The LHS of OPs binomial identity is a <em>Cauchy-product</em> of Catalan numbers</p>\n \n <p>\\begin{align*}\n\\sum_{k=1}^{n - 1} C_{k-1}C_{n -k-1}\\tag{1}\n\\end{align*}</p>\n \n <p>Since Cauchy products occur when multiplying series we could work with generating functions:</p>\n \n <p>\\begin{align*}\nC^2(z)=\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\tag{2}\n\\end{align*}</p>\n</blockquote>\n\n<p>Let $[z^n]$ denote the coefficient operator.</p>\n\n<blockquote>\n <p>We observe with the help of (1) and (2) for $n\\geq 2$</p>\n \n <p>\\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\\\\n&amp;=\\sum_{k=0}^{n-2}C_kC_{n-2-k}\\\\\n&amp;=\\sum_{k=1}^{n-1}C_{k-1}C_{n-1-k}\\\\\n\\end{align*}\n on the other hand we obtain\n \\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\left(\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\right)^2\\\\\n&amp;=[z^{n-2}]\\frac{1}{2z^2}\\left(1-\\sqrt{1-4z}\\right)-1\\\\\n&amp;=[z^{n-1}]\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\\\\n&amp;=[z^{n-1}]C(z)\\\\\n&amp;=C_{n-1}\\\\\n&amp;=\\frac{1}{n}\\binom{2n-2}{n-1}\\\\\n\\end{align*}\n and the claim follows.</p>\n</blockquote>\n" }, { "answer_id": 2083659, "score": 2, "text": "<p>It is a convolution identity. 
You may consider that:\n$$ \\sum_{k\\geq 1}\\frac{x^k}{k}\\binom{2k-2}{k-1} = \\frac{1-\\sqrt{1-4x}}{2}\\tag{1}$$\nholds as a consequence of the generating function of <a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a>.<br> If you square both sides of $(1)$ and consider the coefficient of $x^n$, you get:\n$$ \\sum_{k=1}^{n-1}\\frac{1}{k(n-k)}\\binom{2k-2}{k-1}\\binom{2(n-k)-2}{(n-k)-1} = [x^n]\\left(\\frac{1-\\sqrt{1-4x}}{2}-x\\right) \\tag{2}$$\nand for $n\\geq 2$, the RHS of $(2)$ is just $\\color{red}{\\frac{1}{n}\\binom{2n-2}{n-1}}=\\color{blue}{\\frac{1}{2n-1}\\binom{2n-1}{n}}$, by $(1)$.</p>\n" }, { "answer_id": 2083661, "score": 3, "text": "<p>Note the <em><a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a></em> are defined as\n\\begin{align*}\nC_{n}&amp;=\\frac{1}{n+1}\\binom{2n}{n}\\qquad\\quad n\\geq 0\\\\\n\\end{align*}\nwith generating function\n\\begin{align*}\nC(z)=\\sum_{n=0}^\\infty C_nz^n=\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\n\\end{align*}</p>\n\n<blockquote>\n <p>The LHS of OPs binomial identity is a <em>Cauchy-product</em> of Catalan numbers</p>\n \n <p>\\begin{align*}\n\\sum_{k=1}^{n - 1} C_{k-1}C_{n -k-1}\\tag{1}\n\\end{align*}</p>\n \n <p>Since Cauchy products occur when multiplying series we could work with generating functions:</p>\n \n <p>\\begin{align*}\nC^2(z)=\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\tag{2}\n\\end{align*}</p>\n</blockquote>\n\n<p>Let $[z^n]$ denote the coefficient operator.</p>\n\n<blockquote>\n <p>We observe with the help of (1) and (2) for $n\\geq 2$</p>\n \n <p>\\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\\\\n&amp;=\\sum_{k=0}^{n-2}C_kC_{n-2-k}\\\\\n&amp;=\\sum_{k=1}^{n-1}C_{k-1}C_{n-1-k}\\\\\n\\end{align*}\n on the other hand we obtain\n 
\\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\left(\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\right)^2\\\\\n&amp;=[z^{n-2}]\\frac{1}{2z^2}\\left(1-\\sqrt{1-4z}\\right)-1\\\\\n&amp;=[z^{n-1}]\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\\\\n&amp;=[z^{n-1}]C(z)\\\\\n&amp;=C_{n-1}\\\\\n&amp;=\\frac{1}{n}\\binom{2n-2}{n-1}\\\\\n\\end{align*}\n and the claim follows.</p>\n</blockquote>\n" }, { "answer_id": 2083659, "score": 2, "text": "<p>It is a convolution identity. You may consider that:\n$$ \\sum_{k\\geq 1}\\frac{x^k}{k}\\binom{2k-2}{k-1} = \\frac{1-\\sqrt{1-4x}}{2}\\tag{1}$$\nholds as a consequence of the generating function of <a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a>.<br> If you square both sides of $(1)$ and consider the coefficient of $x^n$, you get:\n$$ \\sum_{k=1}^{n-1}\\frac{1}{k(n-k)}\\binom{2k-2}{k-1}\\binom{2(n-k)-2}{(n-k)-1} = [x^n]\\left(\\frac{1-\\sqrt{1-4x}}{2}-x\\right) \\tag{2}$$\nand for $n\\geq 2$, the RHS of $(2)$ is just $\\color{red}{\\frac{1}{n}\\binom{2n-2}{n-1}}=\\color{blue}{\\frac{1}{2n-1}\\binom{2n-1}{n}}$, by $(1)$.</p>\n" }, { "answer_id": 2083661, "score": 3, "text": "<p>Note the <em><a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a></em> are defined as\n\\begin{align*}\nC_{n}&amp;=\\frac{1}{n+1}\\binom{2n}{n}\\qquad\\quad n\\geq 0\\\\\n\\end{align*}\nwith generating function\n\\begin{align*}\nC(z)=\\sum_{n=0}^\\infty C_nz^n=\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\n\\end{align*}</p>\n\n<blockquote>\n <p>The LHS of OPs binomial identity is a <em>Cauchy-product</em> of Catalan numbers</p>\n \n <p>\\begin{align*}\n\\sum_{k=1}^{n - 1} C_{k-1}C_{n -k-1}\\tag{1}\n\\end{align*}</p>\n \n <p>Since Cauchy products occur when multiplying series we could work with generating functions:</p>\n \n 
<p>\\begin{align*}\nC^2(z)=\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\tag{2}\n\\end{align*}</p>\n</blockquote>\n\n<p>Let $[z^n]$ denote the coefficient operator.</p>\n\n<blockquote>\n <p>We observe with the help of (1) and (2) for $n\\geq 2$</p>\n \n <p>\\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\\\\n&amp;=\\sum_{k=0}^{n-2}C_kC_{n-2-k}\\\\\n&amp;=\\sum_{k=1}^{n-1}C_{k-1}C_{n-1-k}\\\\\n\\end{align*}\n on the other hand we obtain\n \\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\left(\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\right)^2\\\\\n&amp;=[z^{n-2}]\\frac{1}{2z^2}\\left(1-\\sqrt{1-4z}\\right)-1\\\\\n&amp;=[z^{n-1}]\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\\\\n&amp;=[z^{n-1}]C(z)\\\\\n&amp;=C_{n-1}\\\\\n&amp;=\\frac{1}{n}\\binom{2n-2}{n-1}\\\\\n\\end{align*}\n and the claim follows.</p>\n</blockquote>\n" }, { "answer_id": 2083659, "score": 2, "text": "<p>It is a convolution identity. You may consider that:\n$$ \\sum_{k\\geq 1}\\frac{x^k}{k}\\binom{2k-2}{k-1} = \\frac{1-\\sqrt{1-4x}}{2}\\tag{1}$$\nholds as a consequence of the generating function of <a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a>.<br> If you square both sides of $(1)$ and consider the coefficient of $x^n$, you get:\n$$ \\sum_{k=1}^{n-1}\\frac{1}{k(n-k)}\\binom{2k-2}{k-1}\\binom{2(n-k)-2}{(n-k)-1} = [x^n]\\left(\\frac{1-\\sqrt{1-4x}}{2}-x\\right) \\tag{2}$$\nand for $n\\geq 2$, the RHS of $(2)$ is just $\\color{red}{\\frac{1}{n}\\binom{2n-2}{n-1}}=\\color{blue}{\\frac{1}{2n-1}\\binom{2n-1}{n}}$, by $(1)$.</p>\n" }, { "answer_id": 2083661, "score": 3, "text": "<p>Note the <em><a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a></em> are defined as\n\\begin{align*}\nC_{n}&amp;=\\frac{1}{n+1}\\binom{2n}{n}\\qquad\\quad n\\geq 0\\\\\n\\end{align*}\nwith generating 
function\n\\begin{align*}\nC(z)=\\sum_{n=0}^\\infty C_nz^n=\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\n\\end{align*}</p>\n\n<blockquote>\n <p>The LHS of OPs binomial identity is a <em>Cauchy-product</em> of Catalan numbers</p>\n \n <p>\\begin{align*}\n\\sum_{k=1}^{n - 1} C_{k-1}C_{n -k-1}\\tag{1}\n\\end{align*}</p>\n \n <p>Since Cauchy products occur when multiplying series we could work with generating functions:</p>\n \n <p>\\begin{align*}\nC^2(z)=\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\tag{2}\n\\end{align*}</p>\n</blockquote>\n\n<p>Let $[z^n]$ denote the coefficient operator.</p>\n\n<blockquote>\n <p>We observe with the help of (1) and (2) for $n\\geq 2$</p>\n \n <p>\\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\\\\n&amp;=\\sum_{k=0}^{n-2}C_kC_{n-2-k}\\\\\n&amp;=\\sum_{k=1}^{n-1}C_{k-1}C_{n-1-k}\\\\\n\\end{align*}\n on the other hand we obtain\n \\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\left(\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\right)^2\\\\\n&amp;=[z^{n-2}]\\frac{1}{2z^2}\\left(1-\\sqrt{1-4z}\\right)-1\\\\\n&amp;=[z^{n-1}]\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\\\\n&amp;=[z^{n-1}]C(z)\\\\\n&amp;=C_{n-1}\\\\\n&amp;=\\frac{1}{n}\\binom{2n-2}{n-1}\\\\\n\\end{align*}\n and the claim follows.</p>\n</blockquote>\n" }, { "answer_id": 2083659, "score": 2, "text": "<p>It is a convolution identity. 
You may consider that:\n$$ \\sum_{k\\geq 1}\\frac{x^k}{k}\\binom{2k-2}{k-1} = \\frac{1-\\sqrt{1-4x}}{2}\\tag{1}$$\nholds as a consequence of the generating function of <a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a>.<br> If you square both sides of $(1)$ and consider the coefficient of $x^n$, you get:\n$$ \\sum_{k=1}^{n-1}\\frac{1}{k(n-k)}\\binom{2k-2}{k-1}\\binom{2(n-k)-2}{(n-k)-1} = [x^n]\\left(\\frac{1-\\sqrt{1-4x}}{2}-x\\right) \\tag{2}$$\nand for $n\\geq 2$, the RHS of $(2)$ is just $\\color{red}{\\frac{1}{n}\\binom{2n-2}{n-1}}=\\color{blue}{\\frac{1}{2n-1}\\binom{2n-1}{n}}$, by $(1)$.</p>\n" }, { "answer_id": 2083661, "score": 3, "text": "<p>Note the <em><a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a></em> are defined as\n\\begin{align*}\nC_{n}&amp;=\\frac{1}{n+1}\\binom{2n}{n}\\qquad\\quad n\\geq 0\\\\\n\\end{align*}\nwith generating function\n\\begin{align*}\nC(z)=\\sum_{n=0}^\\infty C_nz^n=\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\n\\end{align*}</p>\n\n<blockquote>\n <p>The LHS of OPs binomial identity is a <em>Cauchy-product</em> of Catalan numbers</p>\n \n <p>\\begin{align*}\n\\sum_{k=1}^{n - 1} C_{k-1}C_{n -k-1}\\tag{1}\n\\end{align*}</p>\n \n <p>Since Cauchy products occur when multiplying series we could work with generating functions:</p>\n \n <p>\\begin{align*}\nC^2(z)=\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\tag{2}\n\\end{align*}</p>\n</blockquote>\n\n<p>Let $[z^n]$ denote the coefficient operator.</p>\n\n<blockquote>\n <p>We observe with the help of (1) and (2) for $n\\geq 2$</p>\n \n <p>\\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\\\\n&amp;=\\sum_{k=0}^{n-2}C_kC_{n-2-k}\\\\\n&amp;=\\sum_{k=1}^{n-1}C_{k-1}C_{n-1-k}\\\\\n\\end{align*}\n on the other hand we obtain\n 
\\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\left(\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\right)^2\\\\\n&amp;=[z^{n-2}]\\frac{1}{2z^2}\\left(1-\\sqrt{1-4z}\\right)-1\\\\\n&amp;=[z^{n-1}]\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\\\\n&amp;=[z^{n-1}]C(z)\\\\\n&amp;=C_{n-1}\\\\\n&amp;=\\frac{1}{n}\\binom{2n-2}{n-1}\\\\\n\\end{align*}\n and the claim follows.</p>\n</blockquote>\n" }, { "answer_id": 2083659, "score": 2, "text": "<p>It is a convolution identity. You may consider that:\n$$ \\sum_{k\\geq 1}\\frac{x^k}{k}\\binom{2k-2}{k-1} = \\frac{1-\\sqrt{1-4x}}{2}\\tag{1}$$\nholds as a consequence of the generating function of <a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a>.<br> If you square both sides of $(1)$ and consider the coefficient of $x^n$, you get:\n$$ \\sum_{k=1}^{n-1}\\frac{1}{k(n-k)}\\binom{2k-2}{k-1}\\binom{2(n-k)-2}{(n-k)-1} = [x^n]\\left(\\frac{1-\\sqrt{1-4x}}{2}-x\\right) \\tag{2}$$\nand for $n\\geq 2$, the RHS of $(2)$ is just $\\color{red}{\\frac{1}{n}\\binom{2n-2}{n-1}}=\\color{blue}{\\frac{1}{2n-1}\\binom{2n-1}{n}}$, by $(1)$.</p>\n" }, { "answer_id": 2083661, "score": 3, "text": "<p>Note the <em><a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a></em> are defined as\n\\begin{align*}\nC_{n}&amp;=\\frac{1}{n+1}\\binom{2n}{n}\\qquad\\quad n\\geq 0\\\\\n\\end{align*}\nwith generating function\n\\begin{align*}\nC(z)=\\sum_{n=0}^\\infty C_nz^n=\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\n\\end{align*}</p>\n\n<blockquote>\n <p>The LHS of OPs binomial identity is a <em>Cauchy-product</em> of Catalan numbers</p>\n \n <p>\\begin{align*}\n\\sum_{k=1}^{n - 1} C_{k-1}C_{n -k-1}\\tag{1}\n\\end{align*}</p>\n \n <p>Since Cauchy products occur when multiplying series we could work with generating functions:</p>\n \n 
<p>\\begin{align*}\nC^2(z)=\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\tag{2}\n\\end{align*}</p>\n</blockquote>\n\n<p>Let $[z^n]$ denote the coefficient operator.</p>\n\n<blockquote>\n <p>We observe with the help of (1) and (2) for $n\\geq 2$</p>\n \n <p>\\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\\\\n&amp;=\\sum_{k=0}^{n-2}C_kC_{n-2-k}\\\\\n&amp;=\\sum_{k=1}^{n-1}C_{k-1}C_{n-1-k}\\\\\n\\end{align*}\n on the other hand we obtain\n \\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\left(\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\right)^2\\\\\n&amp;=[z^{n-2}]\\frac{1}{2z^2}\\left(1-\\sqrt{1-4z}\\right)-1\\\\\n&amp;=[z^{n-1}]\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\\\\n&amp;=[z^{n-1}]C(z)\\\\\n&amp;=C_{n-1}\\\\\n&amp;=\\frac{1}{n}\\binom{2n-2}{n-1}\\\\\n\\end{align*}\n and the claim follows.</p>\n</blockquote>\n" }, { "answer_id": 2083659, "score": 2, "text": "<p>It is a convolution identity. You may consider that:\n$$ \\sum_{k\\geq 1}\\frac{x^k}{k}\\binom{2k-2}{k-1} = \\frac{1-\\sqrt{1-4x}}{2}\\tag{1}$$\nholds as a consequence of the generating function of <a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a>.<br> If you square both sides of $(1)$ and consider the coefficient of $x^n$, you get:\n$$ \\sum_{k=1}^{n-1}\\frac{1}{k(n-k)}\\binom{2k-2}{k-1}\\binom{2(n-k)-2}{(n-k)-1} = [x^n]\\left(\\frac{1-\\sqrt{1-4x}}{2}-x\\right) \\tag{2}$$\nand for $n\\geq 2$, the RHS of $(2)$ is just $\\color{red}{\\frac{1}{n}\\binom{2n-2}{n-1}}=\\color{blue}{\\frac{1}{2n-1}\\binom{2n-1}{n}}$, by $(1)$.</p>\n" }, { "answer_id": 2083661, "score": 3, "text": "<p>Note the <em><a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a></em> are defined as\n\\begin{align*}\nC_{n}&amp;=\\frac{1}{n+1}\\binom{2n}{n}\\qquad\\quad n\\geq 0\\\\\n\\end{align*}\nwith generating 
function\n\\begin{align*}\nC(z)=\\sum_{n=0}^\\infty C_nz^n=\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\n\\end{align*}</p>\n\n<blockquote>\n <p>The LHS of OPs binomial identity is a <em>Cauchy-product</em> of Catalan numbers</p>\n \n <p>\\begin{align*}\n\\sum_{k=1}^{n - 1} C_{k-1}C_{n -k-1}\\tag{1}\n\\end{align*}</p>\n \n <p>Since Cauchy products occur when multiplying series we could work with generating functions:</p>\n \n <p>\\begin{align*}\nC^2(z)=\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\tag{2}\n\\end{align*}</p>\n</blockquote>\n\n<p>Let $[z^n]$ denote the coefficient operator.</p>\n\n<blockquote>\n <p>We observe with the help of (1) and (2) for $n\\geq 2$</p>\n \n <p>\\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^{n}C_kC_{n-k}\\right)z^n\\\\\n&amp;=\\sum_{k=0}^{n-2}C_kC_{n-2-k}\\\\\n&amp;=\\sum_{k=1}^{n-1}C_{k-1}C_{n-1-k}\\\\\n\\end{align*}\n on the other hand we obtain\n \\begin{align*}\n[z^{n-2}]C^2(z)&amp;=[z^{n-2}]\\left(\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\right)^2\\\\\n&amp;=[z^{n-2}]\\frac{1}{2z^2}\\left(1-\\sqrt{1-4z}\\right)-1\\\\\n&amp;=[z^{n-1}]\\frac{1}{2z}\\left(1-\\sqrt{1-4z}\\right)\\\\\n&amp;=[z^{n-1}]C(z)\\\\\n&amp;=C_{n-1}\\\\\n&amp;=\\frac{1}{n}\\binom{2n-2}{n-1}\\\\\n\\end{align*}\n and the claim follows.</p>\n</blockquote>\n" }, { "answer_id": 2083659, "score": 2, "text": "<p>It is a convolution identity. 
You may consider that:\n$$ \\sum_{k\\geq 1}\\frac{x^k}{k}\\binom{2k-2}{k-1} = \\frac{1-\\sqrt{1-4x}}{2}\\tag{1}$$\nholds as a consequence of the generating function of <a href=\"https://en.wikipedia.org/wiki/Catalan_number\" rel=\"nofollow noreferrer\">Catalan numbers</a>.<br> If you square both sides of $(1)$ and consider the coefficient of $x^n$, you get:\n$$ \\sum_{k=1}^{n-1}\\frac{1}{k(n-k)}\\binom{2k-2}{k-1}\\binom{2(n-k)-2}{(n-k)-1} = [x^n]\\left(\\frac{1-\\sqrt{1-4x}}{2}-x\\right) \\tag{2}$$\nand for $n\\geq 2$, the RHS of $(2)$ is just $\\color{red}{\\frac{1}{n}\\binom{2n-2}{n-1}}=\\color{blue}{\\frac{1}{2n-1}\\binom{2n-1}{n}}$, by $(1)$.</p>\n" } ]
Question: How to prove this sum $\sum_{k=1}^{n-1}\frac{1}{k(n-k)}\binom{2(k-1)}{k-1}\binom{2(n-k-1)}{n-k-1}=\binom{2(n-1)}{n-1}$ Answer 1: <p>Note the <em><a href="https://en.wikipedia.org/wiki/Catalan_number" rel="nofollow noreferrer">Catalan numbers</a></em> are defined as \begin{align*} C_{n}&amp;=\frac{1}{n+1}\binom{2n}{n}\qquad\quad n\geq 0\\ \end{align*} with generating function \begin{align*} C(z)=\sum_{n=0}^\infty C_nz^n=\frac{1}{2z}\left(1-\sqrt{1-4z}\right) \end{align*}</p> <blockquote> <p>The LHS of OPs binomial identity is a <em>Cauchy-product</em> of Catalan numbers</p> <p>\begin{align*} \sum_{k=1}^{n - 1} C_{k-1}C_{n -k-1}\tag{1} \end{align*}</p> <p>Since Cauchy products occur when multiplying series we could work with generating functions:</p> <p>\begin{align*} C^2(z)=\sum_{n=0}^\infty\left(\sum_{k=0}^{n}C_kC_{n-k}\right)z^n\tag{2} \end{align*}</p> </blockquote> <p>Let $[z^n]$ denote the coefficient operator.</p> <blockquote> <p>We observe with the help of (1) and (2) for $n\geq 2$</p> <p>\begin{align*} [z^{n-2}]C^2(z)&amp;=[z^{n-2}]\sum_{n=0}^\infty\left(\sum_{k=0}^{n}C_kC_{n-k}\right)z^n\\ &amp;=\sum_{k=0}^{n-2}C_kC_{n-2-k}\\ &amp;=\sum_{k=1}^{n-1}C_{k-1}C_{n-1-k}\\ \end{align*} on the other hand we obtain \begin{align*} [z^{n-2}]C^2(z)&amp;=[z^{n-2}]\left(\frac{1}{2z}\left(1-\sqrt{1-4z}\right)\right)^2\\ &amp;=[z^{n-2}]\frac{1}{2z^2}\left(1-\sqrt{1-4z}\right)-1\\ &amp;=[z^{n-1}]\frac{1}{2z}\left(1-\sqrt{1-4z}\right)\\ &amp;=[z^{n-1}]C(z)\\ &amp;=C_{n-1}\\ &amp;=\frac{1}{n}\binom{2n-2}{n-1}\\ \end{align*} and the claim follows.</p> </blockquote> Answer 2: <p>It is a convolution identity. 
You may consider that: $$ \sum_{k\geq 1}\frac{x^k}{k}\binom{2k-2}{k-1} = \frac{1-\sqrt{1-4x}}{2}\tag{1}$$ holds as a consequence of the generating function of <a href="https://en.wikipedia.org/wiki/Catalan_number" rel="nofollow noreferrer">Catalan numbers</a>.<br> If you square both sides of $(1)$ and consider the coefficient of $x^n$, you get: $$ \sum_{k=1}^{n-1}\frac{1}{k(n-k)}\binom{2k-2}{k-1}\binom{2(n-k)-2}{(n-k)-1} = [x^n]\left(\frac{1-\sqrt{1-4x}}{2}-x\right) \tag{2}$$ and for $n\geq 2$, the RHS of $(2)$ is just $\color{red}{\frac{1}{n}\binom{2n-2}{n-1}}=\color{blue}{\frac{1}{2n-1}\binom{2n-1}{n}}$, by $(1)$.</p>
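The convolution identity proved in the answers above is easy to sanity-check numerically. The sketch below (not part of the original thread) verifies, for small $n$, that each summand $\frac{1}{k(n-k)}\binom{2k-2}{k-1}\binom{2(n-k)-2}{n-k-1}$ equals the Catalan product $C_{k-1}C_{n-k-1}$ and that the sum equals $\frac{1}{n}\binom{2n-2}{n-1}=C_{n-1}$.

```python
from math import comb

def catalan(m):
    # Catalan number C_m = binom(2m, m) / (m + 1); the division is exact.
    return comb(2 * m, m) // (m + 1)

for n in range(2, 20):
    # LHS: sum_{k=1}^{n-1} C_{k-1} * C_{n-k-1}
    # (each term equals binom(2k-2, k-1)/k * binom(2(n-k)-2, n-k-1)/(n-k)).
    lhs = sum(catalan(k - 1) * catalan(n - k - 1) for k in range(1, n))
    # RHS: (1/n) * binom(2n-2, n-1), which is C_{n-1}.
    rhs = comb(2 * n - 2, n - 1) // n
    assert lhs == rhs
```

Running the loop raises no assertion error, matching the generating-function argument `[z^{n-2}]C^2(z) = C_{n-1}` given above.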
Limit in n-dimensions with dot product
2,083,642
5
https://math.stackexchange.com/questions/2083642/limit-in-n-dimensions-with-dot-product
<p>Let there be three vectors $\mathbf a$, $\mathbf b$, and $\mathbf h$ in $\mathbb R^n$. Define the real number $$p(\mathbf h) \doteq (\mathbf a \cdot \mathbf h) (\mathbf b \cdot \mathbf h)$$ Does the following limit exist? In particular, does it equal $0$? $$\lim_{\mathbf h \to \mathbf 0} \frac {p(\mathbf h)} {|\mathbf h|} $$ I was thinking of expressing $p$ as a product of sums, like $$p(\mathbf h)=\left(\sum_{j=1}^n a_jh_j\right)\left(\sum_{k=1}^n b_kh_k\right) = \sum_{i=1}^na_ib_ih_i^2 + \sum_{j\neq k} a_jb_kh_jh_k $$ Are there any dot product properties I may be forgetting at this point?</p>
[ { "answer_id": 2083649, "score": 7, "text": "<p>Observe that</p>\n\n<p>$$p(h):=(a\\cdot h)(b\\cdot h)=\\left\\|a\\right\\|\\left\\|b\\right\\|\\left\\|h\\right\\|^2\\cos\\theta\\cos\\eta$$</p>\n\n<p>with $\\;\\theta=\\;$ the angle between $\\;a,h\\;$ , and $\\;\\eta=$ the angle between $\\;b,h\\;$ , and thus</p>\n\n<p>$$\\frac{p(h)}{\\left\\|h\\right\\|}=\\left\\|a\\right\\|\\left\\|b\\right\\|\\left\\|h\\right\\|\\cos\\theta\\cos\\eta\\xrightarrow[h\\to0]{}0$$</p>\n" }, { "answer_id": 2083647, "score": 5, "text": "<p>Using <a href=\"https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality\" rel=\"nofollow noreferrer\">Cauchy-Schwarz inequality</a>, we have\n$$\n0 \\le \\frac{|p(h)|}{|h|}\\le \\frac{|a||b||h|^2}{|h|} = |a||b||h|\n$$\nthat goes to zero as long as $h$ tends to zero.</p>\n" }, { "answer_id": 2083652, "score": 3, "text": "<p>Observe that using Cauchy Schwarz Inequality yields:</p>\n\n<p>$$0 \\leq \\bigg | \\frac {p(h)} h \\bigg | \\leq \\frac 1 {|h|} |a||b||h|^2 = |a||b||h| \\to 0$$\nas $h \\to 0$. Hence by the Squeeze Thm your limit equals 0. 
</p>\n" } ]
Question: Limit in n-dimensions with dot product Answer 1: <p>Observe that</p> <p>$$p(h):=(a\cdot h)(b\cdot h)=\left\|a\right\|\left\|b\right\|\left\|h\right\|^2\cos\theta\cos\eta$$</p> <p>with $\;\theta=\;$ the angle between $\;a,h\;$ , and $\;\eta=$ the angle between $\;b,h\;$ , and thus</p> <p>$$\frac{p(h)}{\left\|h\right\|}=\left\|a\right\|\left\|b\right\|\left\|h\right\|\cos\theta\cos\eta\xrightarrow[h\to0]{}0$$</p> Answer 2: <p>Using <a href="https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality" rel="nofollow noreferrer">Cauchy-Schwarz inequality</a>, we have $$ 0 \le \frac{|p(h)|}{|h|}\le \frac{|a||b||h|^2}{|h|} = |a||b||h| $$ that goes to zero as long as $h$ tends to zero.</p> Answer 3: <p>Observe that using Cauchy Schwarz Inequality yields:</p> <p>$$0 \leq \bigg | \frac {p(h)} h \bigg | \leq \frac 1 {|h|} |a||b||h|^2 = |a||b||h| \to 0$$ as $h \to 0$. Hence by the Squeeze Thm your limit equals 0. </p>
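The Cauchy-Schwarz bound used in the answers above is easy to check numerically. Below is a minimal sketch (the vectors $a$ and $b$ are hypothetical, chosen only for illustration) of the estimate $|p(h)|/\|h\| \le \|a\|\,\|b\|\,\|h\|$, which squeezes the ratio to $0$ as $h \to 0$:

```python
import numpy as np

# Hypothetical fixed vectors a, b (any choice works; these are illustrative).
a = np.array([1.0, -2.0, 3.0])
b = np.array([0.5, 4.0, -1.0])

def ratio(h):
    """Return |p(h)| / ||h|| with p(h) = (a . h)(b . h)."""
    return abs(np.dot(a, h) * np.dot(b, h)) / np.linalg.norm(h)

# Shrink h along a fixed direction: Cauchy-Schwarz makes the ratio O(||h||).
direction = np.array([1.0, 1.0, 1.0])
values = [ratio(t * direction) for t in (1e-1, 1e-2, 1e-3)]

assert values[0] > values[1] > values[2]  # monotone decay toward 0
# The ratio never exceeds the Cauchy-Schwarz bound ||a|| ||b|| ||h||.
h = 1e-3 * direction
assert ratio(h) <= np.linalg.norm(a) * np.linalg.norm(b) * np.linalg.norm(h)
```

Any nonzero direction works here; the ratio decays linearly in $\|h\|$, matching the bound in Answers 2 and 3.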
Linear algebra proof with linear operator
2,083,657
5
https://math.stackexchange.com/questions/2083657/linear-algebra-proof-with-linear-operator
<p>Let $V$ be a finite-dimensional vector space and $T \in L(V)$ a linear map whose matrix $M(T)$ is the same for every basis of $V$. Show that $T$ is a scalar multiple of the identity map $I$.</p> <p>I know it has to do with the vectors in a basis being linearly independent, but I don't know where to go with that when trying to find a contradiction.</p>
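The fact to be proved can be sanity-checked numerically. The sketch below is my own illustration (the helper name and test matrices are not from the thread); it relies on the standard observation, used by several answers, that it suffices to test $MP = PM$ against transvections $I + E_{ij}$ and row scalings:

```python
import numpy as np

def commutes_with_elementaries(M, tol=1e-12):
    """Check M P = P M for P = I + E_ij (i != j) and P = diag(1,..,2,..,1).

    If M commutes with all of these invertible matrices, it commutes with
    everything they generate, forcing M to be a scalar multiple of I.
    """
    n = M.shape[0]
    for i in range(n):
        for j in range(n):
            if i == j:
                P = np.eye(n); P[i, i] = 2.0   # row scaling
            else:
                P = np.eye(n); P[i, j] = 1.0   # transvection I + E_ij
            if not np.allclose(M @ P, P @ M, atol=tol):
                return False
    return True

assert commutes_with_elementaries(3.0 * np.eye(4))                # scalar: passes
assert not commutes_with_elementaries(np.diag([1.0, 2.0, 3.0]))   # non-scalar: fails
```

Only matrices of the form $\lambda I$ pass all the elementary-matrix commutation tests, which is exactly the content of the exercise.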
[ { "answer_id": 2083679, "score": 1, "text": "<p><strong>Hint</strong>: The condition \"$M$ is the same for every basis of $V$\" can be written:</p>\n\n<p>$$\\forall P \\in \\mathbf{GL}_n(K) \\;\\;\\;\\; PMP^{-1} = M$$</p>\n\n<p>i.e. $M$ is in the center of $\\mathbf{GL}_n(K)$.</p>\n\n<p>You can check what the above condition means for some well-chosen matrices of $\\mathbf{GL}_n(K)$ (e.g. the transvection $I_n + E_{i,j}$).</p>\n" }, { "answer_id": 2083682, "score": 0, "text": "<p>We must have $A=SAS^{-1}$, for every invertible matrix $S$, so $AS=SA$.</p>\n\n<p>Let $E_i(c)$ be the matrix obtained from the identity by multiplying the $i$-th row by $c\\ne0$. Then $E_i(c)A$ is the same as $A$ with the $i$-th row multiplied by $c$, whereas $AE_i(c)$ is the same as $A$ but with the $i$-th <em>column</em> multiplied by $c$.</p>\n\n<p>Choosing $i=1$ and $c=2$, we see that the coefficient in place $(1,k)$, for $k\\ne1$, is multiplied by $2$ in $E_1(2)A$, whereas it is not changed in $AE_1(2)$. Hence it is zero. So the first row is $[a_{11}\\;0\\;\\dots\\;0]$.</p>\n\n<p>You can prove similarly that the matrix is diagonal.</p>\n\n<p>In particular there is a basis formed by eigenvectors; permuting them doesn't change the matrix, so all the eigenvalues are equal.</p>\n\n<p>The converse is also true: the matrix relative to every basis of the map $v\\mapsto av$ is $aI$.</p>\n" }, { "answer_id": 2083683, "score": 0, "text": "<p>If you have a finite-dimensional vector space $V$ and you want to change the representation of vectors according to a new basis, you just need to find a full-rank matrix $X$ that transfers every vector $v$ to the new representation $Xv$.</p>\n\n<p>Now, assume a map $M:V\\rightarrow V$</p>\n\n<p>$Mv=w$</p>\n\n<p>If we want vectors $v$ and $w$ to be represented with a new basis, there exists a matrix $X$ for it.</p>\n\n<p>Multiply both sides of $Mv=w$ by $X$ to get</p>\n\n<p>$XMv=Xw$</p>\n\n<p>The vector $w$ is transferred to the new basis. 
However, the same can be done for $v$, if we can commute $X$ and $M$. This commutation holds for every full-rank matrix $X$ if and only if $M$ is a multiple of the identity matrix.</p>\n" }, { "answer_id": 2083693, "score": 0, "text": "<p>Let's show that the matrix element $m_{i,j}=0$ for $i\\neq j$:\nmake the following new basis\n$u_a=v_a$ except $a=i$ where $u_i=v_i+v_j$ </p>\n\n<p>which corresponds to the matrix $A$ being almost the identity except the off-diagonal element $A_{ij}=1$</p>\n\n<p>After that we have $A^{-1}MA=M$ which implies that $m_{i,j}=0$ and also $m_{ii}=m_{jj}$</p>\n\n<p>Repeating the same procedure for different $i\\neq j$ we conclude that all off-diagonal elements are zero and the diagonal elements are all equal.</p>\n\n<p>To exemplify the method take $4\\times 4$ matrix M:\n$$\nM=\\left(\n\\begin{array}{cccc}\n m_{1,1} &amp; m_{1,2} &amp; m_{1,3} &amp; m_{1,4} \\\\\n m_{2,1} &amp; m_{2,2} &amp; m_{2,3} &amp; m_{2,4} \\\\\n m_{3,1} &amp; m_{3,2} &amp; m_{3,3} &amp; m_{3,4} \\\\\n m_{4,1} &amp; m_{4,2} &amp; m_{4,3} &amp; m_{4,4} \\\\\n\\end{array}\n\\right)\n$$\nand $i=1$ $j=2$ to get for $A^{-1}.M.A-M$:\n$$\n\\left(\n\\begin{array}{cccc}\n -m_{2,1} &amp; m_{1,1}-m_{2,1}-m_{2,2} &amp; -m_{2,3} &amp; -m_{2,4} \\\\\n 0 &amp; m_{2,1} &amp; 0 &amp; 0 \\\\\n 0 &amp; m_{3,1} &amp; 0 &amp; 0 \\\\\n 0 &amp; m_{4,1} &amp; 0 &amp; 0 \\\\\n\\end{array}\n\\right)\n$$\nwhich implies that $m_{2,1}=0$ and $m_{1,1}=m_{2,2}$</p>\n" }, { "answer_id": 2083705, "score": 0, "text": "<p>Note that I will be using Einstein summation notation to denote matrix multiplication, where repeated indices are summed over. E.g. $c_{ik}=a_{ij}b_{jk}$ denotes the same thing as $c_{ik}=\\sum_j a_{ij}b_{jk}$.</p>\n\n<p>Let $E_{ij}$ denote the matrix that is all zeros except for a $1$ at location $ij$, and assume $i\\ne j$. Then $E_{ij}^2=0$, so $(1+E_{ij})(1-E_{ij})=1$. Now since $1+E_{ij}$ is invertible, the condition given tells us that $(1+E_{ij})M=M+E_{ij}M=M(1+E_{ij})=M+ME_{ij}$. 
Then this tells us that $E_{ij}M=ME_{ij}$.</p>\n\n<p>Now $(E_{ij}M)_{rc}=\\delta_{ri}\\delta_{kj}m_{kc}=\\delta_{ri}m_{jc}$ is the matrix that is all zeros except for the $i$th row which contains the $j$th row of $M$. Similarly, $(ME_{ij})_{rc}=M_{rk}\\delta_{ki}\\delta_{jc}=m_{ri}\\delta_{jc}$, so\n$ME_{ij}$ is the matrix that is all zeros except for the $j$th column which contains the $i$th column of $M$. Comparing these two matrices, we have $\\delta_{ri}m_{jc}=m_{ri}\\delta_{jc}$. Letting $r=i$, we have $m_{jc}=m_{ii}\\delta_{jc}$, or $m_{jc}=0$ when $c\\ne j$ and $m_{jj}=m_{ii}$. Since $i$ and $j$ were arbitrary, we see that every entry of the diagonal must be equal and every off-diagonal entry must be zero.</p>\n\n<p>In other words, $M=\\lambda\\cdot 1$. </p>\n" }, { "answer_id": 2083767, "score": 0, "text": "<p>Here is a (mostly) 'matrix free' approach:</p>\n\n<p>Suppose there is some $b$ such that $Tb \\notin \\operatorname{sp} \\{ b\\}$.</p>\n\n<p>Let $b,Tb,...$ be a basis. Then we see that $Ae_1 = e_2$.</p>\n\n<p>Now choose the basis $Tb,b,...$, we see that $A e_2 = e_1$. </p>\n\n<p>Combining the two implies that $Tb = b$, a contradiction. Hence $Tb \\in \\operatorname{sp} \\{ b\\}$ for all $b$.</p>\n\n<p>Hence there is some $\\lambda_b$ such that $Tb= \\lambda_b b$ for all $b$.\nSuppose $b_1,...,b_n$ is a basis, then\n$T(b_1+\\cdots +b_n) = \\lambda_{b_1+\\cdots +b_n} (b_1+\\cdots +b_n) = \\lambda_{b_1} b_1 + \\cdots + \\lambda_{b_n} b_n$, and linear independence gives\n$\\lambda_{b_1} = \\cdots = \\lambda_{b_n}$. Hence $T$ is a scalar times the\nidentity.</p>\n" }, { "answer_id": 2083679, "score": 1, "text": "<p><strong>Hint</strong>: The condition \"$M$ is the same for every basis of $V$\" can be written:</p>\n\n<p>$$\\forall P \\in \\mathbf{GL}_n(K) \\;\\;\\;\\; PMP^{-1} = M$$</p>\n\n<p>ie. $M$ is in the center of $\\mathbf{GL}_n(K)$.</p>\n\n<p>You can check what the above condition means for some well chosen matrices of $\\mathbf{GL}_n(K)$ (e.g. 
the transvection $I_n + E_{i,j}$).</p>\n" }, { "answer_id": 2083682, "score": 0, "text": "<p>We must have $A=SAS^{-1}$, for every invertible matrix $S$, so $AS=SA$.</p>\n\n<p>Let $E_i(c)$ be the matrix obtained from the identity by multiplying the $i$-th row by $c\\ne0$. Then $E_i(c)A$ is the same as $A$ with the $i$-th row multiplied by $c$, whereas $AE_i(c)$ is the same as $A$ but with the $i$-th <em>column</em> multiplied by $c$.</p>\n\n<p>Choosing $i=1$ and $c=2$, we see that for $k\\neq 1$ the coefficient in place $(1,k)$ is multiplied by $2$ in $E_1(2)A$, whereas it is not changed in $AE_1(2)$. Hence it is zero. So the first row is $[a_{11}\\;0\\;\\dots\\;0]$.</p>\n\n<p>You can prove similarly that the matrix is diagonal.</p>\n\n<p>In particular there is a basis formed by eigenvectors; permuting them doesn't change the matrix, so all the eigenvalues are equal.</p>\n\n<p>The converse is also true: the matrix relative to every basis of the map $v\\mapsto av$ is $aI$.</p>\n" }, { "answer_id": 2083683, "score": 0, "text": "<p>If you have a finite-dimensional vector space $V$ and you want to change the representation of vectors according to a new basis, you just need to find a full-rank matrix $X$ that transfers every vector $v$ to the new representation $Xv$.</p>\n\n<p>Now, assume a map $M:V\\rightarrow V$</p>\n\n<p>$Mv=w$</p>\n\n<p>If we want vectors $v$ and $w$ to be represented with a new basis, there exists a matrix $X$ for it.</p>\n\n<p>Multiply both sides of $Mv=w$ by $X$ to get</p>\n\n<p>$XMv=Xw$</p>\n\n<p>The vector $w$ is transferred to the new basis. However, the same can be done for $v$, if we can commute $X$ and $M$. 
This is possible, for every full rank matrix $X$, if and only if $M$ is a multiple of the identity matrix.</p>\n" } ]
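The answers above all reduce to the same checkable criterion: $M$ represents the same map in every basis exactly when $PMP^{-1}=M$ for every invertible $P$, and the transvections $I+E_{i,j}$ from the hint already force $M$ to be a scalar multiple of the identity. A quick numerical sketch of that criterion (an illustration added here, not part of the original thread; the function name and tolerances are made up):

```python
# Sketch: M is basis-independent iff P @ M @ inv(P) == M for every invertible P.
# Scalar multiples of I pass; anything else is caught by some conjugation,
# in particular by a transvection I + E_ij as the hint suggests.
import numpy as np

def is_basis_independent(M, trials=100, seed=0):
    """Check P M P^{-1} == M against random invertible P (a spot check, not a proof)."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    for _ in range(trials):
        P = rng.standard_normal((n, n))
        if abs(np.linalg.det(P)) < 1e-8:   # skip the (measure-zero) near-singular draws
            continue
        if not np.allclose(P @ M @ np.linalg.inv(P), M):
            return False
    return True

n = 4
assert is_basis_independent(3.0 * np.eye(n))       # scalar matrix: same in every basis
M = np.diag([1.0, 2.0, 3.0, 4.0])                  # diagonal but not scalar
assert not is_basis_independent(M)

# The transvection P = I + E_{1,2} pinpoints the failure deterministically:
P = np.eye(n)
P[0, 1] = 1.0
assert not np.allclose(P @ M @ np.linalg.inv(P), M)
```

Random conjugations catch a non-scalar $M$ almost surely; the transvection check mirrors the explicit $A^{-1}MA-M$ computation in the $4\times 4$ example above.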
Question: Linear algebra proof with linear operator Answer 1: <p><strong>Hint</strong>: The condition "$M$ is the same for every basis of $V$" can be written:</p>

<p>$$\forall P \in \mathbf{GL}_n(K) \;\;\;\; PMP^{-1} = M$$</p>

<p>i.e. $M$ is in the center of $\mathbf{GL}_n(K)$.</p>

<p>You can check what the above condition means for some well chosen matrices of $\mathbf{GL}_n(K)$ (e.g. the transvection $I_n + E_{i,j}$).</p> Answer 2: <p>We must have $A=SAS^{-1}$, for every invertible matrix $S$, so $AS=SA$.</p>

<p>Let $E_i(c)$ be the matrix obtained from the identity by multiplying the $i$-th row by $c\ne0$. Then $E_i(c)A$ is the same as $A$ with the $i$-th row multiplied by $c$, whereas $AE_i(c)$ is the same as $A$ but with the $i$-th <em>column</em> multiplied by $c$.</p>

<p>Choosing $i=1$ and $c=2$, we see that for $k\neq 1$ the coefficient in place $(1,k)$ is multiplied by $2$ in $E_1(2)A$, whereas it is not changed in $AE_1(2)$. Hence it is zero. So the first row is $[a_{11}\;0\;\dots\;0]$.</p>

<p>You can prove similarly that the matrix is diagonal.</p>

<p>In particular there is a basis formed by eigenvectors; permuting them doesn't change the matrix, so all the eigenvalues are equal.</p>

<p>The converse is also true: the matrix relative to every basis of the map $v\mapsto av$ is $aI$.</p> Answer 3: <p>If you have a finite-dimensional vector space $V$ and you want to change the representation of vectors according to a new basis, you just need to find a full-rank matrix $X$ that transfers every vector $v$ to the new representation $Xv$.</p>

<p>Now, assume a map $M:V\rightarrow V$</p>

<p>$Mv=w$</p>

<p>If we want vectors $v$ and $w$ to be represented with a new basis, there exists a matrix $X$ for it.</p>

<p>Multiply both sides of $Mv=w$ by $X$ to get</p>

<p>$XMv=Xw$</p>

<p>The vector $w$ is transferred to the new basis. However, the same can be done for $v$, if we can commute $X$ and $M$. 
This is possible, for every full rank matrix $X$, if and only if $M$ is a multiple of the identity matrix.</p> Answer 4: <p>Let's show that the matrix element $m_{i,j}=0$ for $i\neq j$: make the following new basis $u_a=v_a$ except $a=i$ where $u_i=v_i+v_j$ </p>

<p>which corresponds to the matrix $A$ being almost identity except the off-diagonal element $A_{ij}=1$</p>

<p>After that we have $A^{-1}MA=M$ which implies that $m_{i,j}=0$ and also $m_{ii}=m_{jj}$</p>

<p>Repeating the same procedure for different $i\neq j$ we conclude that all off-diagonal elements are zero and the diagonal elements are all equal.</p>

<p>To exemplify the method take $4\times 4$ matrix M:
$$
M=\left(
\begin{array}{cccc}
 m_{1,1} &amp; m_{1,2} &amp; m_{1,3} &amp; m_{1,4} \\
 m_{2,1} &amp; m_{2,2} &amp; m_{2,3} &amp; m_{2,4} \\
 m_{3,1} &amp; m_{3,2} &amp; m_{3,3} &amp; m_{3,4} \\
 m_{4,1} &amp; m_{4,2} &amp; m_{4,3} &amp; m_{4,4} \\
\end{array}
\right)
$$
and $i=1$ $j=2$ to get for $A^{-1}.M.A-M$:
$$
\left(
\begin{array}{cccc}
 -m_{2,1} &amp; m_{1,1}-m_{2,1}-m_{2,2} &amp; -m_{2,3} &amp; -m_{2,4} \\
 0 &amp; m_{2,1} &amp; 0 &amp; 0 \\
 0 &amp; m_{3,1} &amp; 0 &amp; 0 \\
 0 &amp; m_{4,1} &amp; 0 &amp; 0 \\
\end{array}
\right)
$$
which implies that $m_{2,1}=0$ and $m_{1,1}=m_{2,2}$</p> Answer 5: <p>Note that I will be using Einstein summation notation to denote matrix multiplication, where repeated indices are summed over. E.g. $c_{ik}=a_{ij}b_{jk}$ denotes the same thing as $c_{ik}=\sum_j a_{ij}b_{jk}$.</p>

<p>Let $E_{ij}$ denote the matrix that is all zeros except for a one at location $ij$, and assume $i\ne j$. Then $E_{ij}^2=0$, so $(1+E_{ij})(1-E_{ij})=1$. Now since $1+E_{ij}$ is invertible, the condition given tells us that $(1+E_{ij})M=M+E_{ij}M=M(1+E_{ij})=M+ME_{ij}$. Then this tells us that $E_{ij}M=ME_{ij}$.</p>

<p>Now $(E_{ij}M)_{rc}=\delta_{ri}\delta_{kj}m_{kc}=\delta_{ri}m_{jc}$ is the matrix that is all zeros except for the $i$th row which contains the $j$th row of $M$. 
Similarly, $(ME_{ij})_{rc}=M_{rk}\delta_{ki}\delta_{jc}=m_{ri}\delta_{jc}$ $ME_{ij}$ is the matrix that is all zeros except for the $j$th column which contains the $i$th column of $M$. Comparing these two matrices, we have $\delta{ri}m_{jc}=m_{ri}\delta_{jc}$. Letting $r=i$, we have $m_{jc}=m_{ii}\delta{jc}$, or $m_{jc}=0$ when $c\ne j$ and $m_{jj}=m_{ii}$. Since $i$ and $j$ were arbitrary, we see that every entry of the diagonal must be equal and every off diagonal entry must be zero.</p> <p>In other words, $M=\lambda\cdot 1$. </p> Answer 6: <p>Here is a (mostly) 'matrix free' approach:</p> <p>Suppose there is some $b$ such that $Tb \notin \operatorname{sp} \{ b\}$.</p> <p>Let $b,Tb,...$ be a basis. Then we see that $Ae_1 = e_2$.</p> <p>Now choose the basis $Tb,b,...$, we see that $A e_2 = e_1$. </p> <p>Combining the two implies that $Tb = b$, a contradiction. Hence $Tb \in \operatorname{sp} \{ b\}$ for all $b$.</p> <p>Hence there is some $\lambda_b$ such that $Tb= \lambda_b b$ for all $b$. Suppose $b_1,...,b_n$ is a basis, then $T(b_1+\cdots +b_n) = \lambda_{b_1+\cdots +b_n} (b_1+\cdots +b_n) = \lambda_{b_1} b_1 + \cdots \lambda_{b_n} b_n$, and linear independence gives $\lambda_{b_1} = \cdots = \lambda_{b_n}$. Hence $T$ is a scalar times the identity.</p> Answer 7: <p><strong>Hint</strong>: The condition "$M$ is the same for every basis of $V$" can be written:</p> <p>$$\forall P \in \mathbf{GL}_n(K) \;\;\;\; PMP^{-1} = M$$</p> <p>ie. $M$ is in the center of $\mathbf{GL}_n(K)$.</p> <p>You can check what the above condition means for some well chosen matrices of $\mathbf{GL}_n(K)$ (e.g. the transvection $I_n + E_{i,j}$).</p> Answer 8: <p>We must have $A=SAS^{-1}$, for every invertible matrix $S$, so $AS=SA$.</p> <p>Let $E_i(c)$ be the matrix obtained from the identity by multiplying the $i$-th row by $c\ne0$. 
Then $E_i(c)A$ is the same as $A$ with the $i$-th row multiplied by $c$, whereas $AE_i(c)$ is the same as $A$ but with the $i$-th <em>column</em> multiplied by $c$.</p> <p>Choosing $i=1$ and $c=2$, we see that the coefficient in place $(1,k)$, for $k\ge2$, is multiplied by $2$ in $E_1(2)A$, whereas it is not changed in $AE_1(2)$. Hence it is zero. So the first row is $[a_{11}\;0\;\dots\;0]$.</p> <p>You can prove similarly that the matrix is diagonal.</p> <p>In particular there is a basis formed by eigenvectors; permuting them doesn't change the matrix, so all the eigenvalues are equal.</p> <p>The converse is also true: the matrix of the map $v\mapsto av$ relative to every basis is $aI$.</p> Answer 9: <p>If you have a finite-dimensional vector space $V$ and you want to change the representation of vectors according to a new basis, you just need to find a full-rank matrix $X$ that transfers every vector $v$ to the new representation $Xv$.</p> <p>Now, assume a map $M:V\rightarrow V$,</p> <p>$Mv=w.$</p> <p>If we want the vectors $v$ and $w$ to be represented in a new basis, there exists a matrix $X$ for it.</p> <p>Multiply both sides of $Mv=w$ by $X$ to get</p> <p>$XMv=Xw.$</p> <p>The vector $w$ is transferred to the new basis. However, the same can be done for $v$ if we can commute $X$ and $M$.
This is possible, for every full rank matrix $X$, if and only if $M$ is a multiple of the identity matrix.</p> Answer 10: <p>Let's show that the matrix element $m_{i,j}=0$ for $i\neq j$: make the following new basis $u_a=v_a$ except $a=i$ where $u_i=v_i+v_j$ </p> <p>which corresponds to the matrix $A$ being almost identity except the off-diagonal element $A_{ij}=1$</p> <p>After that we have $A^{-1}MA=M$ which implies that $m_{i,j}=0$ and also $m_{ii}=m_{jj}$</p> <p>Repeating the same procedure for different $i\neq j$ we conclude that all off-diagonal elements are zero and the diagonal elements are all equal.</p> <p>To exemplify the method take $4\times 4$ matrix M: $$ M=\left( \begin{array}{cccc} m_{1,1} &amp; m_{1,2} &amp; m_{1,3} &amp; m_{1,4} \\ m_{2,1} &amp; m_{2,2} &amp; m_{2,3} &amp; m_{2,4} \\ m_{3,1} &amp; m_{3,2} &amp; m_{3,3} &amp; m_{3,4} \\ m_{4,1} &amp; m_{4,2} &amp; m_{4,3} &amp; m_{4,4} \\ \end{array} \right) $$ and $i=1$ $j=2$ to get for $A^{-1}.M.A-M$: $$ \left( \begin{array}{cccc} -m_{2,1} &amp; m_{1,1}-m_{2,1}-m_{2,2} &amp; -m_{2,3} &amp; -m_{2,4} \\ 0 &amp; m_{2,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{3,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{4,1} &amp; 0 &amp; 0 \\ \end{array} \right) $$ which implies that $m_{2,1}=0$ and $m_{1,1}=m_{2,2}$</p> Answer 11: <p>Note that I will be using Einstein summation notation to denote matrix multiplication, where repeated indices are summed over. E.g. $c_{ik}=a_{ij}b_{jk}$ denotes the same thing as $c_{ik}=\sum_j a_{ij}b_{jk}$.</p> <p>Let $E_{ij}$ denote the matrix that is all zeros except for at location $ij$, and assume $i\ne j$. Then $E_{ij}^2=0$, so $(1+E_{ij})(1-E_{ij})=1$. Now since $1+E_{ij}$ is invertible, the condition given tells us that $(1+E_{ij})M=M+E_{ij}M=M(1+E_{ij})=M+ME_{ij}$. Then this tells us that $E_{ij}M=ME_{ij}$.</p> <p>Now $(E_{ij}M)_{rc}=\delta_{ri}\delta_{kj}m_{kc}=\delta_{ri}m_{jc}$ is the matrix that is all zeros except for the $i$th row which contains the $j$th row of $M$. 
Similarly, $(ME_{ij})_{rc}=M_{rk}\delta_{ki}\delta_{jc}=m_{ri}\delta_{jc}$ $ME_{ij}$ is the matrix that is all zeros except for the $j$th column which contains the $i$th column of $M$. Comparing these two matrices, we have $\delta{ri}m_{jc}=m_{ri}\delta_{jc}$. Letting $r=i$, we have $m_{jc}=m_{ii}\delta{jc}$, or $m_{jc}=0$ when $c\ne j$ and $m_{jj}=m_{ii}$. Since $i$ and $j$ were arbitrary, we see that every entry of the diagonal must be equal and every off diagonal entry must be zero.</p> <p>In other words, $M=\lambda\cdot 1$. </p> Answer 12: <p>Here is a (mostly) 'matrix free' approach:</p> <p>Suppose there is some $b$ such that $Tb \notin \operatorname{sp} \{ b\}$.</p> <p>Let $b,Tb,...$ be a basis. Then we see that $Ae_1 = e_2$.</p> <p>Now choose the basis $Tb,b,...$, we see that $A e_2 = e_1$. </p> <p>Combining the two implies that $Tb = b$, a contradiction. Hence $Tb \in \operatorname{sp} \{ b\}$ for all $b$.</p> <p>Hence there is some $\lambda_b$ such that $Tb= \lambda_b b$ for all $b$. Suppose $b_1,...,b_n$ is a basis, then $T(b_1+\cdots +b_n) = \lambda_{b_1+\cdots +b_n} (b_1+\cdots +b_n) = \lambda_{b_1} b_1 + \cdots \lambda_{b_n} b_n$, and linear independence gives $\lambda_{b_1} = \cdots = \lambda_{b_n}$. Hence $T$ is a scalar times the identity.</p> Answer 13: <p><strong>Hint</strong>: The condition "$M$ is the same for every basis of $V$" can be written:</p> <p>$$\forall P \in \mathbf{GL}_n(K) \;\;\;\; PMP^{-1} = M$$</p> <p>ie. $M$ is in the center of $\mathbf{GL}_n(K)$.</p> <p>You can check what the above condition means for some well chosen matrices of $\mathbf{GL}_n(K)$ (e.g. the transvection $I_n + E_{i,j}$).</p> Answer 14: <p>We must have $A=SAS^{-1}$, for every invertible matrix $S$, so $AS=SA$.</p> <p>Let $E_i(c)$ be the matrix obtained from the identity by multiplying the $i$-th row by $c\ne0$. 
Then $E_i(c)A$ is the same as $A$ with the $i$-th row multiplied by $c$, whereas $AE_i(c)$ is the same as $A$ but with the $i$-th <em>column</em> multiplied by $c$.</p> <p>Choosing $i=1$ and $c=2$, we see that the coefficient in place $(1,k)$ is multiplied by $2$ in $E_1(2)A$, whereas it is not changed in $AE_1(2)$. Hence it is zero. So the first row is $[a_{11}\;0\;\dots\;0]$.</p> <p>You can prove similarly that the matrix is diagonal.</p> <p>In particular there is a basis formed by eigenvectors; permuting them doesn't change the matrix, so all the eigenvalues are equal.</p> <p>The converse is also true: the matrix relative to every basis of the map $v\mapsto av$ is $aI$.</p> Answer 15: <p>If you have a finite vector space $V$, if you want to change the representation of vectors,according to a new basis, you just need to find a full-rank matrix $X$ that transfers every vector $v$ to the new representation $Xv$.</p> <p>Now, assume a map $M:V\rightarrow V$</p> <p>$Mv=w$</p> <p>If we want vectors $v$ and $w$ to be represented with a new basis, there exist a matrix $X$ for it.</p> <p>Multiply both sides of $Mv=w$ by $X$ to get</p> <p>$XMv=Xw$</p> <p>The vector $w$ is transferred to the new basis. However, the same can be done for $v$, if we can commute $X$ and $M$. 
This is possible, for every full rank matrix $X$, if and only if $M$ is a multiple of the identity matrix.</p> Answer 16: <p>Let's show that the matrix element $m_{i,j}=0$ for $i\neq j$: make the following new basis $u_a=v_a$ except $a=i$ where $u_i=v_i+v_j$ </p> <p>which corresponds to the matrix $A$ being almost identity except the off-diagonal element $A_{ij}=1$</p> <p>After that we have $A^{-1}MA=M$ which implies that $m_{i,j}=0$ and also $m_{ii}=m_{jj}$</p> <p>Repeating the same procedure for different $i\neq j$ we conclude that all off-diagonal elements are zero and the diagonal elements are all equal.</p> <p>To exemplify the method take $4\times 4$ matrix M: $$ M=\left( \begin{array}{cccc} m_{1,1} &amp; m_{1,2} &amp; m_{1,3} &amp; m_{1,4} \\ m_{2,1} &amp; m_{2,2} &amp; m_{2,3} &amp; m_{2,4} \\ m_{3,1} &amp; m_{3,2} &amp; m_{3,3} &amp; m_{3,4} \\ m_{4,1} &amp; m_{4,2} &amp; m_{4,3} &amp; m_{4,4} \\ \end{array} \right) $$ and $i=1$ $j=2$ to get for $A^{-1}.M.A-M$: $$ \left( \begin{array}{cccc} -m_{2,1} &amp; m_{1,1}-m_{2,1}-m_{2,2} &amp; -m_{2,3} &amp; -m_{2,4} \\ 0 &amp; m_{2,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{3,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{4,1} &amp; 0 &amp; 0 \\ \end{array} \right) $$ which implies that $m_{2,1}=0$ and $m_{1,1}=m_{2,2}$</p> Answer 17: <p>Note that I will be using Einstein summation notation to denote matrix multiplication, where repeated indices are summed over. E.g. $c_{ik}=a_{ij}b_{jk}$ denotes the same thing as $c_{ik}=\sum_j a_{ij}b_{jk}$.</p> <p>Let $E_{ij}$ denote the matrix that is all zeros except for at location $ij$, and assume $i\ne j$. Then $E_{ij}^2=0$, so $(1+E_{ij})(1-E_{ij})=1$. Now since $1+E_{ij}$ is invertible, the condition given tells us that $(1+E_{ij})M=M+E_{ij}M=M(1+E_{ij})=M+ME_{ij}$. Then this tells us that $E_{ij}M=ME_{ij}$.</p> <p>Now $(E_{ij}M)_{rc}=\delta_{ri}\delta_{kj}m_{kc}=\delta_{ri}m_{jc}$ is the matrix that is all zeros except for the $i$th row which contains the $j$th row of $M$. 
Similarly, $(ME_{ij})_{rc}=M_{rk}\delta_{ki}\delta_{jc}=m_{ri}\delta_{jc}$ $ME_{ij}$ is the matrix that is all zeros except for the $j$th column which contains the $i$th column of $M$. Comparing these two matrices, we have $\delta{ri}m_{jc}=m_{ri}\delta_{jc}$. Letting $r=i$, we have $m_{jc}=m_{ii}\delta{jc}$, or $m_{jc}=0$ when $c\ne j$ and $m_{jj}=m_{ii}$. Since $i$ and $j$ were arbitrary, we see that every entry of the diagonal must be equal and every off diagonal entry must be zero.</p> <p>In other words, $M=\lambda\cdot 1$. </p> Answer 18: <p>Here is a (mostly) 'matrix free' approach:</p> <p>Suppose there is some $b$ such that $Tb \notin \operatorname{sp} \{ b\}$.</p> <p>Let $b,Tb,...$ be a basis. Then we see that $Ae_1 = e_2$.</p> <p>Now choose the basis $Tb,b,...$, we see that $A e_2 = e_1$. </p> <p>Combining the two implies that $Tb = b$, a contradiction. Hence $Tb \in \operatorname{sp} \{ b\}$ for all $b$.</p> <p>Hence there is some $\lambda_b$ such that $Tb= \lambda_b b$ for all $b$. Suppose $b_1,...,b_n$ is a basis, then $T(b_1+\cdots +b_n) = \lambda_{b_1+\cdots +b_n} (b_1+\cdots +b_n) = \lambda_{b_1} b_1 + \cdots \lambda_{b_n} b_n$, and linear independence gives $\lambda_{b_1} = \cdots = \lambda_{b_n}$. Hence $T$ is a scalar times the identity.</p> Answer 19: <p><strong>Hint</strong>: The condition "$M$ is the same for every basis of $V$" can be written:</p> <p>$$\forall P \in \mathbf{GL}_n(K) \;\;\;\; PMP^{-1} = M$$</p> <p>ie. $M$ is in the center of $\mathbf{GL}_n(K)$.</p> <p>You can check what the above condition means for some well chosen matrices of $\mathbf{GL}_n(K)$ (e.g. the transvection $I_n + E_{i,j}$).</p> Answer 20: <p>We must have $A=SAS^{-1}$, for every invertible matrix $S$, so $AS=SA$.</p> <p>Let $E_i(c)$ be the matrix obtained from the identity by multiplying the $i$-th row by $c\ne0$. 
Then $E_i(c)A$ is the same as $A$ with the $i$-th row multiplied by $c$, whereas $AE_i(c)$ is the same as $A$ but with the $i$-th <em>column</em> multiplied by $c$.</p> <p>Choosing $i=1$ and $c=2$, we see that the coefficient in place $(1,k)$ is multiplied by $2$ in $E_1(2)A$, whereas it is not changed in $AE_1(2)$. Hence it is zero. So the first row is $[a_{11}\;0\;\dots\;0]$.</p> <p>You can prove similarly that the matrix is diagonal.</p> <p>In particular there is a basis formed by eigenvectors; permuting them doesn't change the matrix, so all the eigenvalues are equal.</p> <p>The converse is also true: the matrix relative to every basis of the map $v\mapsto av$ is $aI$.</p> Answer 21: <p>If you have a finite vector space $V$, if you want to change the representation of vectors,according to a new basis, you just need to find a full-rank matrix $X$ that transfers every vector $v$ to the new representation $Xv$.</p> <p>Now, assume a map $M:V\rightarrow V$</p> <p>$Mv=w$</p> <p>If we want vectors $v$ and $w$ to be represented with a new basis, there exist a matrix $X$ for it.</p> <p>Multiply both sides of $Mv=w$ by $X$ to get</p> <p>$XMv=Xw$</p> <p>The vector $w$ is transferred to the new basis. However, the same can be done for $v$, if we can commute $X$ and $M$. 
This is possible, for every full rank matrix $X$, if and only if $M$ is a multiple of the identity matrix.</p> Answer 22: <p>Let's show that the matrix element $m_{i,j}=0$ for $i\neq j$: make the following new basis $u_a=v_a$ except $a=i$ where $u_i=v_i+v_j$ </p> <p>which corresponds to the matrix $A$ being almost identity except the off-diagonal element $A_{ij}=1$</p> <p>After that we have $A^{-1}MA=M$ which implies that $m_{i,j}=0$ and also $m_{ii}=m_{jj}$</p> <p>Repeating the same procedure for different $i\neq j$ we conclude that all off-diagonal elements are zero and the diagonal elements are all equal.</p> <p>To exemplify the method take $4\times 4$ matrix M: $$ M=\left( \begin{array}{cccc} m_{1,1} &amp; m_{1,2} &amp; m_{1,3} &amp; m_{1,4} \\ m_{2,1} &amp; m_{2,2} &amp; m_{2,3} &amp; m_{2,4} \\ m_{3,1} &amp; m_{3,2} &amp; m_{3,3} &amp; m_{3,4} \\ m_{4,1} &amp; m_{4,2} &amp; m_{4,3} &amp; m_{4,4} \\ \end{array} \right) $$ and $i=1$ $j=2$ to get for $A^{-1}.M.A-M$: $$ \left( \begin{array}{cccc} -m_{2,1} &amp; m_{1,1}-m_{2,1}-m_{2,2} &amp; -m_{2,3} &amp; -m_{2,4} \\ 0 &amp; m_{2,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{3,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{4,1} &amp; 0 &amp; 0 \\ \end{array} \right) $$ which implies that $m_{2,1}=0$ and $m_{1,1}=m_{2,2}$</p> Answer 23: <p>Note that I will be using Einstein summation notation to denote matrix multiplication, where repeated indices are summed over. E.g. $c_{ik}=a_{ij}b_{jk}$ denotes the same thing as $c_{ik}=\sum_j a_{ij}b_{jk}$.</p> <p>Let $E_{ij}$ denote the matrix that is all zeros except for at location $ij$, and assume $i\ne j$. Then $E_{ij}^2=0$, so $(1+E_{ij})(1-E_{ij})=1$. Now since $1+E_{ij}$ is invertible, the condition given tells us that $(1+E_{ij})M=M+E_{ij}M=M(1+E_{ij})=M+ME_{ij}$. Then this tells us that $E_{ij}M=ME_{ij}$.</p> <p>Now $(E_{ij}M)_{rc}=\delta_{ri}\delta_{kj}m_{kc}=\delta_{ri}m_{jc}$ is the matrix that is all zeros except for the $i$th row which contains the $j$th row of $M$. 
Similarly, $(ME_{ij})_{rc}=M_{rk}\delta_{ki}\delta_{jc}=m_{ri}\delta_{jc}$ $ME_{ij}$ is the matrix that is all zeros except for the $j$th column which contains the $i$th column of $M$. Comparing these two matrices, we have $\delta{ri}m_{jc}=m_{ri}\delta_{jc}$. Letting $r=i$, we have $m_{jc}=m_{ii}\delta{jc}$, or $m_{jc}=0$ when $c\ne j$ and $m_{jj}=m_{ii}$. Since $i$ and $j$ were arbitrary, we see that every entry of the diagonal must be equal and every off diagonal entry must be zero.</p> <p>In other words, $M=\lambda\cdot 1$. </p> Answer 24: <p>Here is a (mostly) 'matrix free' approach:</p> <p>Suppose there is some $b$ such that $Tb \notin \operatorname{sp} \{ b\}$.</p> <p>Let $b,Tb,...$ be a basis. Then we see that $Ae_1 = e_2$.</p> <p>Now choose the basis $Tb,b,...$, we see that $A e_2 = e_1$. </p> <p>Combining the two implies that $Tb = b$, a contradiction. Hence $Tb \in \operatorname{sp} \{ b\}$ for all $b$.</p> <p>Hence there is some $\lambda_b$ such that $Tb= \lambda_b b$ for all $b$. Suppose $b_1,...,b_n$ is a basis, then $T(b_1+\cdots +b_n) = \lambda_{b_1+\cdots +b_n} (b_1+\cdots +b_n) = \lambda_{b_1} b_1 + \cdots \lambda_{b_n} b_n$, and linear independence gives $\lambda_{b_1} = \cdots = \lambda_{b_n}$. Hence $T$ is a scalar times the identity.</p> Answer 25: <p><strong>Hint</strong>: The condition "$M$ is the same for every basis of $V$" can be written:</p> <p>$$\forall P \in \mathbf{GL}_n(K) \;\;\;\; PMP^{-1} = M$$</p> <p>ie. $M$ is in the center of $\mathbf{GL}_n(K)$.</p> <p>You can check what the above condition means for some well chosen matrices of $\mathbf{GL}_n(K)$ (e.g. the transvection $I_n + E_{i,j}$).</p> Answer 26: <p>We must have $A=SAS^{-1}$, for every invertible matrix $S$, so $AS=SA$.</p> <p>Let $E_i(c)$ be the matrix obtained from the identity by multiplying the $i$-th row by $c\ne0$. 
Then $E_i(c)A$ is the same as $A$ with the $i$-th row multiplied by $c$, whereas $AE_i(c)$ is the same as $A$ but with the $i$-th <em>column</em> multiplied by $c$.</p> <p>Choosing $i=1$ and $c=2$, we see that the coefficient in place $(1,k)$ is multiplied by $2$ in $E_1(2)A$, whereas it is not changed in $AE_1(2)$. Hence it is zero. So the first row is $[a_{11}\;0\;\dots\;0]$.</p> <p>You can prove similarly that the matrix is diagonal.</p> <p>In particular there is a basis formed by eigenvectors; permuting them doesn't change the matrix, so all the eigenvalues are equal.</p> <p>The converse is also true: the matrix relative to every basis of the map $v\mapsto av$ is $aI$.</p> Answer 27: <p>If you have a finite vector space $V$, if you want to change the representation of vectors,according to a new basis, you just need to find a full-rank matrix $X$ that transfers every vector $v$ to the new representation $Xv$.</p> <p>Now, assume a map $M:V\rightarrow V$</p> <p>$Mv=w$</p> <p>If we want vectors $v$ and $w$ to be represented with a new basis, there exist a matrix $X$ for it.</p> <p>Multiply both sides of $Mv=w$ by $X$ to get</p> <p>$XMv=Xw$</p> <p>The vector $w$ is transferred to the new basis. However, the same can be done for $v$, if we can commute $X$ and $M$. 
This is possible, for every full rank matrix $X$, if and only if $M$ is a multiple of the identity matrix.</p> Answer 28: <p>Let's show that the matrix element $m_{i,j}=0$ for $i\neq j$: make the following new basis $u_a=v_a$ except $a=i$ where $u_i=v_i+v_j$ </p> <p>which corresponds to the matrix $A$ being almost identity except the off-diagonal element $A_{ij}=1$</p> <p>After that we have $A^{-1}MA=M$ which implies that $m_{i,j}=0$ and also $m_{ii}=m_{jj}$</p> <p>Repeating the same procedure for different $i\neq j$ we conclude that all off-diagonal elements are zero and the diagonal elements are all equal.</p> <p>To exemplify the method take $4\times 4$ matrix M: $$ M=\left( \begin{array}{cccc} m_{1,1} &amp; m_{1,2} &amp; m_{1,3} &amp; m_{1,4} \\ m_{2,1} &amp; m_{2,2} &amp; m_{2,3} &amp; m_{2,4} \\ m_{3,1} &amp; m_{3,2} &amp; m_{3,3} &amp; m_{3,4} \\ m_{4,1} &amp; m_{4,2} &amp; m_{4,3} &amp; m_{4,4} \\ \end{array} \right) $$ and $i=1$ $j=2$ to get for $A^{-1}.M.A-M$: $$ \left( \begin{array}{cccc} -m_{2,1} &amp; m_{1,1}-m_{2,1}-m_{2,2} &amp; -m_{2,3} &amp; -m_{2,4} \\ 0 &amp; m_{2,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{3,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{4,1} &amp; 0 &amp; 0 \\ \end{array} \right) $$ which implies that $m_{2,1}=0$ and $m_{1,1}=m_{2,2}$</p> Answer 29: <p>Note that I will be using Einstein summation notation to denote matrix multiplication, where repeated indices are summed over. E.g. $c_{ik}=a_{ij}b_{jk}$ denotes the same thing as $c_{ik}=\sum_j a_{ij}b_{jk}$.</p> <p>Let $E_{ij}$ denote the matrix that is all zeros except for at location $ij$, and assume $i\ne j$. Then $E_{ij}^2=0$, so $(1+E_{ij})(1-E_{ij})=1$. Now since $1+E_{ij}$ is invertible, the condition given tells us that $(1+E_{ij})M=M+E_{ij}M=M(1+E_{ij})=M+ME_{ij}$. Then this tells us that $E_{ij}M=ME_{ij}$.</p> <p>Now $(E_{ij}M)_{rc}=\delta_{ri}\delta_{kj}m_{kc}=\delta_{ri}m_{jc}$ is the matrix that is all zeros except for the $i$th row which contains the $j$th row of $M$. 
Similarly, $(ME_{ij})_{rc}=M_{rk}\delta_{ki}\delta_{jc}=m_{ri}\delta_{jc}$ $ME_{ij}$ is the matrix that is all zeros except for the $j$th column which contains the $i$th column of $M$. Comparing these two matrices, we have $\delta{ri}m_{jc}=m_{ri}\delta_{jc}$. Letting $r=i$, we have $m_{jc}=m_{ii}\delta{jc}$, or $m_{jc}=0$ when $c\ne j$ and $m_{jj}=m_{ii}$. Since $i$ and $j$ were arbitrary, we see that every entry of the diagonal must be equal and every off diagonal entry must be zero.</p> <p>In other words, $M=\lambda\cdot 1$. </p> Answer 30: <p>Here is a (mostly) 'matrix free' approach:</p> <p>Suppose there is some $b$ such that $Tb \notin \operatorname{sp} \{ b\}$.</p> <p>Let $b,Tb,...$ be a basis. Then we see that $Ae_1 = e_2$.</p> <p>Now choose the basis $Tb,b,...$, we see that $A e_2 = e_1$. </p> <p>Combining the two implies that $Tb = b$, a contradiction. Hence $Tb \in \operatorname{sp} \{ b\}$ for all $b$.</p> <p>Hence there is some $\lambda_b$ such that $Tb= \lambda_b b$ for all $b$. Suppose $b_1,...,b_n$ is a basis, then $T(b_1+\cdots +b_n) = \lambda_{b_1+\cdots +b_n} (b_1+\cdots +b_n) = \lambda_{b_1} b_1 + \cdots \lambda_{b_n} b_n$, and linear independence gives $\lambda_{b_1} = \cdots = \lambda_{b_n}$. Hence $T$ is a scalar times the identity.</p> Answer 31: <p><strong>Hint</strong>: The condition "$M$ is the same for every basis of $V$" can be written:</p> <p>$$\forall P \in \mathbf{GL}_n(K) \;\;\;\; PMP^{-1} = M$$</p> <p>ie. $M$ is in the center of $\mathbf{GL}_n(K)$.</p> <p>You can check what the above condition means for some well chosen matrices of $\mathbf{GL}_n(K)$ (e.g. the transvection $I_n + E_{i,j}$).</p> Answer 32: <p>We must have $A=SAS^{-1}$, for every invertible matrix $S$, so $AS=SA$.</p> <p>Let $E_i(c)$ be the matrix obtained from the identity by multiplying the $i$-th row by $c\ne0$. 
Then $E_i(c)A$ is the same as $A$ with the $i$-th row multiplied by $c$, whereas $AE_i(c)$ is the same as $A$ but with the $i$-th <em>column</em> multiplied by $c$.</p> <p>Choosing $i=1$ and $c=2$, we see that the coefficient in place $(1,k)$ is multiplied by $2$ in $E_1(2)A$, whereas it is not changed in $AE_1(2)$. Hence it is zero. So the first row is $[a_{11}\;0\;\dots\;0]$.</p> <p>You can prove similarly that the matrix is diagonal.</p> <p>In particular there is a basis formed by eigenvectors; permuting them doesn't change the matrix, so all the eigenvalues are equal.</p> <p>The converse is also true: the matrix relative to every basis of the map $v\mapsto av$ is $aI$.</p> Answer 33: <p>If you have a finite vector space $V$, if you want to change the representation of vectors,according to a new basis, you just need to find a full-rank matrix $X$ that transfers every vector $v$ to the new representation $Xv$.</p> <p>Now, assume a map $M:V\rightarrow V$</p> <p>$Mv=w$</p> <p>If we want vectors $v$ and $w$ to be represented with a new basis, there exist a matrix $X$ for it.</p> <p>Multiply both sides of $Mv=w$ by $X$ to get</p> <p>$XMv=Xw$</p> <p>The vector $w$ is transferred to the new basis. However, the same can be done for $v$, if we can commute $X$ and $M$. 
This is possible, for every full rank matrix $X$, if and only if $M$ is a multiple of the identity matrix.</p> Answer 34: <p>Let's show that the matrix element $m_{i,j}=0$ for $i\neq j$: make the following new basis $u_a=v_a$ except $a=i$ where $u_i=v_i+v_j$ </p> <p>which corresponds to the matrix $A$ being almost identity except the off-diagonal element $A_{ij}=1$</p> <p>After that we have $A^{-1}MA=M$ which implies that $m_{i,j}=0$ and also $m_{ii}=m_{jj}$</p> <p>Repeating the same procedure for different $i\neq j$ we conclude that all off-diagonal elements are zero and the diagonal elements are all equal.</p> <p>To exemplify the method take $4\times 4$ matrix M: $$ M=\left( \begin{array}{cccc} m_{1,1} &amp; m_{1,2} &amp; m_{1,3} &amp; m_{1,4} \\ m_{2,1} &amp; m_{2,2} &amp; m_{2,3} &amp; m_{2,4} \\ m_{3,1} &amp; m_{3,2} &amp; m_{3,3} &amp; m_{3,4} \\ m_{4,1} &amp; m_{4,2} &amp; m_{4,3} &amp; m_{4,4} \\ \end{array} \right) $$ and $i=1$ $j=2$ to get for $A^{-1}.M.A-M$: $$ \left( \begin{array}{cccc} -m_{2,1} &amp; m_{1,1}-m_{2,1}-m_{2,2} &amp; -m_{2,3} &amp; -m_{2,4} \\ 0 &amp; m_{2,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{3,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{4,1} &amp; 0 &amp; 0 \\ \end{array} \right) $$ which implies that $m_{2,1}=0$ and $m_{1,1}=m_{2,2}$</p> Answer 35: <p>Note that I will be using Einstein summation notation to denote matrix multiplication, where repeated indices are summed over. E.g. $c_{ik}=a_{ij}b_{jk}$ denotes the same thing as $c_{ik}=\sum_j a_{ij}b_{jk}$.</p> <p>Let $E_{ij}$ denote the matrix that is all zeros except for at location $ij$, and assume $i\ne j$. Then $E_{ij}^2=0$, so $(1+E_{ij})(1-E_{ij})=1$. Now since $1+E_{ij}$ is invertible, the condition given tells us that $(1+E_{ij})M=M+E_{ij}M=M(1+E_{ij})=M+ME_{ij}$. Then this tells us that $E_{ij}M=ME_{ij}$.</p> <p>Now $(E_{ij}M)_{rc}=\delta_{ri}\delta_{kj}m_{kc}=\delta_{ri}m_{jc}$ is the matrix that is all zeros except for the $i$th row which contains the $j$th row of $M$. 
Similarly, $(ME_{ij})_{rc}=M_{rk}\delta_{ki}\delta_{jc}=m_{ri}\delta_{jc}$ $ME_{ij}$ is the matrix that is all zeros except for the $j$th column which contains the $i$th column of $M$. Comparing these two matrices, we have $\delta{ri}m_{jc}=m_{ri}\delta_{jc}$. Letting $r=i$, we have $m_{jc}=m_{ii}\delta{jc}$, or $m_{jc}=0$ when $c\ne j$ and $m_{jj}=m_{ii}$. Since $i$ and $j$ were arbitrary, we see that every entry of the diagonal must be equal and every off diagonal entry must be zero.</p> <p>In other words, $M=\lambda\cdot 1$. </p> Answer 36: <p>Here is a (mostly) 'matrix free' approach:</p> <p>Suppose there is some $b$ such that $Tb \notin \operatorname{sp} \{ b\}$.</p> <p>Let $b,Tb,...$ be a basis. Then we see that $Ae_1 = e_2$.</p> <p>Now choose the basis $Tb,b,...$, we see that $A e_2 = e_1$. </p> <p>Combining the two implies that $Tb = b$, a contradiction. Hence $Tb \in \operatorname{sp} \{ b\}$ for all $b$.</p> <p>Hence there is some $\lambda_b$ such that $Tb= \lambda_b b$ for all $b$. Suppose $b_1,...,b_n$ is a basis, then $T(b_1+\cdots +b_n) = \lambda_{b_1+\cdots +b_n} (b_1+\cdots +b_n) = \lambda_{b_1} b_1 + \cdots \lambda_{b_n} b_n$, and linear independence gives $\lambda_{b_1} = \cdots = \lambda_{b_n}$. Hence $T$ is a scalar times the identity.</p> Answer 37: <p><strong>Hint</strong>: The condition "$M$ is the same for every basis of $V$" can be written:</p> <p>$$\forall P \in \mathbf{GL}_n(K) \;\;\;\; PMP^{-1} = M$$</p> <p>ie. $M$ is in the center of $\mathbf{GL}_n(K)$.</p> <p>You can check what the above condition means for some well chosen matrices of $\mathbf{GL}_n(K)$ (e.g. the transvection $I_n + E_{i,j}$).</p> Answer 38: <p>We must have $A=SAS^{-1}$, for every invertible matrix $S$, so $AS=SA$.</p> <p>Let $E_i(c)$ be the matrix obtained from the identity by multiplying the $i$-th row by $c\ne0$. 
Then $E_i(c)A$ is the same as $A$ with the $i$-th row multiplied by $c$, whereas $AE_i(c)$ is the same as $A$ but with the $i$-th <em>column</em> multiplied by $c$.</p> <p>Choosing $i=1$ and $c=2$, we see that the coefficient in place $(1,k)$ is multiplied by $2$ in $E_1(2)A$, whereas it is not changed in $AE_1(2)$. Hence it is zero. So the first row is $[a_{11}\;0\;\dots\;0]$.</p> <p>You can prove similarly that the matrix is diagonal.</p> <p>In particular there is a basis formed by eigenvectors; permuting them doesn't change the matrix, so all the eigenvalues are equal.</p> <p>The converse is also true: the matrix relative to every basis of the map $v\mapsto av$ is $aI$.</p> Answer 39: <p>If you have a finite vector space $V$, if you want to change the representation of vectors,according to a new basis, you just need to find a full-rank matrix $X$ that transfers every vector $v$ to the new representation $Xv$.</p> <p>Now, assume a map $M:V\rightarrow V$</p> <p>$Mv=w$</p> <p>If we want vectors $v$ and $w$ to be represented with a new basis, there exist a matrix $X$ for it.</p> <p>Multiply both sides of $Mv=w$ by $X$ to get</p> <p>$XMv=Xw$</p> <p>The vector $w$ is transferred to the new basis. However, the same can be done for $v$, if we can commute $X$ and $M$. 
This is possible, for every full rank matrix $X$, if and only if $M$ is a multiple of the identity matrix.</p> Answer 40: <p>Let's show that the matrix element $m_{i,j}=0$ for $i\neq j$: make the following new basis $u_a=v_a$ except $a=i$ where $u_i=v_i+v_j$ </p> <p>which corresponds to the matrix $A$ being almost identity except the off-diagonal element $A_{ij}=1$</p> <p>After that we have $A^{-1}MA=M$ which implies that $m_{i,j}=0$ and also $m_{ii}=m_{jj}$</p> <p>Repeating the same procedure for different $i\neq j$ we conclude that all off-diagonal elements are zero and the diagonal elements are all equal.</p> <p>To exemplify the method take $4\times 4$ matrix M: $$ M=\left( \begin{array}{cccc} m_{1,1} &amp; m_{1,2} &amp; m_{1,3} &amp; m_{1,4} \\ m_{2,1} &amp; m_{2,2} &amp; m_{2,3} &amp; m_{2,4} \\ m_{3,1} &amp; m_{3,2} &amp; m_{3,3} &amp; m_{3,4} \\ m_{4,1} &amp; m_{4,2} &amp; m_{4,3} &amp; m_{4,4} \\ \end{array} \right) $$ and $i=1$ $j=2$ to get for $A^{-1}.M.A-M$: $$ \left( \begin{array}{cccc} -m_{2,1} &amp; m_{1,1}-m_{2,1}-m_{2,2} &amp; -m_{2,3} &amp; -m_{2,4} \\ 0 &amp; m_{2,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{3,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{4,1} &amp; 0 &amp; 0 \\ \end{array} \right) $$ which implies that $m_{2,1}=0$ and $m_{1,1}=m_{2,2}$</p> Answer 41: <p>Note that I will be using Einstein summation notation to denote matrix multiplication, where repeated indices are summed over. E.g. $c_{ik}=a_{ij}b_{jk}$ denotes the same thing as $c_{ik}=\sum_j a_{ij}b_{jk}$.</p> <p>Let $E_{ij}$ denote the matrix that is all zeros except for at location $ij$, and assume $i\ne j$. Then $E_{ij}^2=0$, so $(1+E_{ij})(1-E_{ij})=1$. Now since $1+E_{ij}$ is invertible, the condition given tells us that $(1+E_{ij})M=M+E_{ij}M=M(1+E_{ij})=M+ME_{ij}$. Then this tells us that $E_{ij}M=ME_{ij}$.</p> <p>Now $(E_{ij}M)_{rc}=\delta_{ri}\delta_{kj}m_{kc}=\delta_{ri}m_{jc}$ is the matrix that is all zeros except for the $i$th row which contains the $j$th row of $M$. 
Similarly, $(ME_{ij})_{rc}=M_{rk}\delta_{ki}\delta_{jc}=m_{ri}\delta_{jc}$ $ME_{ij}$ is the matrix that is all zeros except for the $j$th column which contains the $i$th column of $M$. Comparing these two matrices, we have $\delta{ri}m_{jc}=m_{ri}\delta_{jc}$. Letting $r=i$, we have $m_{jc}=m_{ii}\delta{jc}$, or $m_{jc}=0$ when $c\ne j$ and $m_{jj}=m_{ii}$. Since $i$ and $j$ were arbitrary, we see that every entry of the diagonal must be equal and every off diagonal entry must be zero.</p> <p>In other words, $M=\lambda\cdot 1$. </p> Answer 42: <p>Here is a (mostly) 'matrix free' approach:</p> <p>Suppose there is some $b$ such that $Tb \notin \operatorname{sp} \{ b\}$.</p> <p>Let $b,Tb,...$ be a basis. Then we see that $Ae_1 = e_2$.</p> <p>Now choose the basis $Tb,b,...$, we see that $A e_2 = e_1$. </p> <p>Combining the two implies that $Tb = b$, a contradiction. Hence $Tb \in \operatorname{sp} \{ b\}$ for all $b$.</p> <p>Hence there is some $\lambda_b$ such that $Tb= \lambda_b b$ for all $b$. Suppose $b_1,...,b_n$ is a basis, then $T(b_1+\cdots +b_n) = \lambda_{b_1+\cdots +b_n} (b_1+\cdots +b_n) = \lambda_{b_1} b_1 + \cdots \lambda_{b_n} b_n$, and linear independence gives $\lambda_{b_1} = \cdots = \lambda_{b_n}$. Hence $T$ is a scalar times the identity.</p> Answer 43: <p><strong>Hint</strong>: The condition "$M$ is the same for every basis of $V$" can be written:</p> <p>$$\forall P \in \mathbf{GL}_n(K) \;\;\;\; PMP^{-1} = M$$</p> <p>ie. $M$ is in the center of $\mathbf{GL}_n(K)$.</p> <p>You can check what the above condition means for some well chosen matrices of $\mathbf{GL}_n(K)$ (e.g. the transvection $I_n + E_{i,j}$).</p> Answer 44: <p>We must have $A=SAS^{-1}$, for every invertible matrix $S$, so $AS=SA$.</p> <p>Let $E_i(c)$ be the matrix obtained from the identity by multiplying the $i$-th row by $c\ne0$. 
Then $E_i(c)A$ is the same as $A$ with the $i$-th row multiplied by $c$, whereas $AE_i(c)$ is the same as $A$ but with the $i$-th <em>column</em> multiplied by $c$.</p> <p>Choosing $i=1$ and $c=2$, we see that the coefficient in place $(1,k)$ is multiplied by $2$ in $E_1(2)A$, whereas it is not changed in $AE_1(2)$. Hence it is zero. So the first row is $[a_{11}\;0\;\dots\;0]$.</p> <p>You can prove similarly that the matrix is diagonal.</p> <p>In particular there is a basis formed by eigenvectors; permuting them doesn't change the matrix, so all the eigenvalues are equal.</p> <p>The converse is also true: the matrix relative to every basis of the map $v\mapsto av$ is $aI$.</p> Answer 45: <p>If you have a finite vector space $V$, if you want to change the representation of vectors,according to a new basis, you just need to find a full-rank matrix $X$ that transfers every vector $v$ to the new representation $Xv$.</p> <p>Now, assume a map $M:V\rightarrow V$</p> <p>$Mv=w$</p> <p>If we want vectors $v$ and $w$ to be represented with a new basis, there exist a matrix $X$ for it.</p> <p>Multiply both sides of $Mv=w$ by $X$ to get</p> <p>$XMv=Xw$</p> <p>The vector $w$ is transferred to the new basis. However, the same can be done for $v$, if we can commute $X$ and $M$. 
This is possible, for every full rank matrix $X$, if and only if $M$ is a multiple of the identity matrix.</p> Answer 46: <p>Let's show that the matrix element $m_{i,j}=0$ for $i\neq j$: make the following new basis $u_a=v_a$ except $a=i$ where $u_i=v_i+v_j$ </p> <p>which corresponds to the matrix $A$ being almost identity except the off-diagonal element $A_{ij}=1$</p> <p>After that we have $A^{-1}MA=M$ which implies that $m_{i,j}=0$ and also $m_{ii}=m_{jj}$</p> <p>Repeating the same procedure for different $i\neq j$ we conclude that all off-diagonal elements are zero and the diagonal elements are all equal.</p> <p>To exemplify the method take $4\times 4$ matrix M: $$ M=\left( \begin{array}{cccc} m_{1,1} &amp; m_{1,2} &amp; m_{1,3} &amp; m_{1,4} \\ m_{2,1} &amp; m_{2,2} &amp; m_{2,3} &amp; m_{2,4} \\ m_{3,1} &amp; m_{3,2} &amp; m_{3,3} &amp; m_{3,4} \\ m_{4,1} &amp; m_{4,2} &amp; m_{4,3} &amp; m_{4,4} \\ \end{array} \right) $$ and $i=1$ $j=2$ to get for $A^{-1}.M.A-M$: $$ \left( \begin{array}{cccc} -m_{2,1} &amp; m_{1,1}-m_{2,1}-m_{2,2} &amp; -m_{2,3} &amp; -m_{2,4} \\ 0 &amp; m_{2,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{3,1} &amp; 0 &amp; 0 \\ 0 &amp; m_{4,1} &amp; 0 &amp; 0 \\ \end{array} \right) $$ which implies that $m_{2,1}=0$ and $m_{1,1}=m_{2,2}$</p> Answer 47: <p>Note that I will be using Einstein summation notation to denote matrix multiplication, where repeated indices are summed over. E.g. $c_{ik}=a_{ij}b_{jk}$ denotes the same thing as $c_{ik}=\sum_j a_{ij}b_{jk}$.</p> <p>Let $E_{ij}$ denote the matrix that is all zeros except for at location $ij$, and assume $i\ne j$. Then $E_{ij}^2=0$, so $(1+E_{ij})(1-E_{ij})=1$. Now since $1+E_{ij}$ is invertible, the condition given tells us that $(1+E_{ij})M=M+E_{ij}M=M(1+E_{ij})=M+ME_{ij}$. Then this tells us that $E_{ij}M=ME_{ij}$.</p> <p>Now $(E_{ij}M)_{rc}=\delta_{ri}\delta_{kj}m_{kc}=\delta_{ri}m_{jc}$ is the matrix that is all zeros except for the $i$th row which contains the $j$th row of $M$. 
Similarly, $(ME_{ij})_{rc}=m_{rk}\delta_{ki}\delta_{jc}=m_{ri}\delta_{jc}$, so $ME_{ij}$ is the matrix that is all zeros except for the $j$th column which contains the $i$th column of $M$. Comparing these two matrices, we have $\delta_{ri}m_{jc}=m_{ri}\delta_{jc}$. Letting $r=i$, we have $m_{jc}=m_{ii}\delta_{jc}$, or $m_{jc}=0$ when $c\ne j$ and $m_{jj}=m_{ii}$. Since $i$ and $j$ were arbitrary, we see that every entry of the diagonal must be equal and every off-diagonal entry must be zero.</p> <p>In other words, $M=\lambda\cdot 1$. </p> Answer 48: <p>Here is a (mostly) 'matrix free' approach:</p> <p>Suppose there is some $b$ such that $Tb \notin \operatorname{sp} \{ b\}$.</p> <p>Let $b,Tb,...$ be a basis. Then we see that $Ae_1 = e_2$.</p> <p>Now choose the basis $Tb,b,...$; we see that $A e_2 = e_1$. </p> <p>Combining the two implies that $Tb = b$, a contradiction. Hence $Tb \in \operatorname{sp} \{ b\}$ for all $b$.</p> <p>Hence there is some $\lambda_b$ such that $Tb= \lambda_b b$ for all $b$. Suppose $b_1,...,b_n$ is a basis, then $T(b_1+\cdots +b_n) = \lambda_{b_1+\cdots +b_n} (b_1+\cdots +b_n) = \lambda_{b_1} b_1 + \cdots + \lambda_{b_n} b_n$, and linear independence gives $\lambda_{b_1} = \cdots = \lambda_{b_n}$. Hence $T$ is a scalar times the identity.</p> Answer 49: <p><strong>Hint</strong>: The condition "$M$ is the same for every basis of $V$" can be written:</p> <p>$$\forall P \in \mathbf{GL}_n(K) \;\;\;\; PMP^{-1} = M$$</p> <p>i.e. $M$ is in the center of $\mathbf{GL}_n(K)$.</p> <p>You can check what the above condition means for some well chosen matrices of $\mathbf{GL}_n(K)$ (e.g. the transvection $I_n + E_{i,j}$).</p>
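The transvection criterion from the hint above — $M$ commutes with every $I_n+E_{i,j}$, $i\ne j$, exactly when $M$ is scalar — is easy to check numerically. A minimal sketch in plain Python (no external libraries; the helper names are mine, not from any of the answers):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def commutes_with_transvections(M):
    """True iff (I + E_ij) M == M (I + E_ij) for all i != j,
    which is equivalent to E_ij M == M E_ij."""
    n = len(M)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Build the transvection T = I + E_ij.
            T = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
            T[i][j] = 1
            if matmul(T, M) != matmul(M, T):
                return False
    return True

assert commutes_with_transvections([[3, 0], [0, 3]])       # scalar matrix
assert not commutes_with_transvections([[3, 1], [0, 2]])   # not scalar
```

As the answers argue, commuting with the transvections alone already forces the off-diagonal entries to vanish and the diagonal entries to agree.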
Prove that $\frac{bc}{a(a-b)(a-c)}+\frac{ca}{b(b-c)(b-a)}+\frac{ab}{c(c-a)(c-b)}=\frac{ab+bc+ca}{abc}$
2,083,723
5
https://math.stackexchange.com/questions/2083723/prove-that-fracbcaa-ba-c-fraccabb-cb-a-fracabcc-ac-b
<blockquote> <p>Prove that $$\frac{bc}{a(a-b)(a-c)}+\frac{ca}{b(b-c)(b-a)}+\frac{ab}{c(c-a)(c-b)}=\frac{ab+bc+ca}{abc}$$</p> </blockquote> <p>To do this I unified three terms on the left side with a common denominator and then factored the numerator (with the aid of Wolfram Alpha... as the numerator is a 5th order expression). $$\frac{b²c²(c-b)+c²a²(a-c)+a²b²(b-a)}{abc(a-b)(b-c)(c-a)}=\dots=\frac{(ab+bc+ca)(a-b)(b-c)(c-a)}{abc(a-b)(b-c)(c-a)}$$</p> <p>My question is would there be a better or easier way to lead from the left side to the right side? Actually, the original question was to 'simplify' the left side without showing the right side. I am not sure if I could do so without Wolfram Alpha. So that's why I'm asking an easier way. What I've thought of was, for example, $$f(x):=(x-a)(x-b)(x-c)$$ $$\text{LHS}=abc\left(\frac1{a^2f'(a)}+\frac1{b^2f'(b)}+\frac1{c^2f'(c)}\right)$$ which didn't help.</p>
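Before attempting a proof it is worth confirming the identity exactly on a few sample triples. A quick sketch using Python's `fractions` module (exact rational arithmetic, so the equality check is not affected by rounding; the function names are illustrative only):

```python
from fractions import Fraction

def lhs(a, b, c):
    # Left side of the identity, term by term.
    return (b * c / (a * (a - b) * (a - c))
            + c * a / (b * (b - c) * (b - a))
            + a * b / (c * (c - a) * (c - b)))

def rhs(a, b, c):
    return (a * b + b * c + c * a) / (a * b * c)

# Any triple of distinct non-zero rationals works.
for triple in [(1, 2, 3), (2, 5, 7), (-1, 4, 9)]:
    a, b, c = (Fraction(t) for t in triple)
    assert lhs(a, b, c) == rhs(a, b, c)
```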
[ { "answer_id": 2083773, "score": 6, "text": "<p><strong>Claim:</strong> </p>\n\n<p>$$\\frac{bc}{a(a-b)(a-c)}+\\frac{ca}{b(b-c)(b-a)}+\\frac{ab}{c(c-a)(c-b)}=\\frac{ab+bc+ca}{abc}$$</p>\n\n<p><strong>Proof:</strong> </p>\n\n<p>Note that the Lagrange interpolating polynomial through $(a,p),(b,q),(c,r)$ is given by:</p>\n\n<p>$$f(x)=\\frac{(x-b)(x-c)}{(a-b)(a-c)}p+\\frac{(x-a)(x-c)}{(b-a)(b-c)}q+\\frac{(x-a)(x-b)}{(c-a)(c-b)}r$$</p>\n\n<p>We seek the value of $f(0)$ when $p=\\dfrac{1}{a},q=\\dfrac{1}{b},r=\\dfrac{1}{c}$, i.e. when $y=f(x)$ interpolates $y=\\dfrac{1}{x}$ at $x=a,b,c$.</p>\n\n<p>We then notice that $1-xf(x)$ is a degree $3$ polynomial interpolant passing through $(0,1),(a,0),(b,0),(c,0)$, and is in fact the unique such polynomial.</p>\n\n<p>By inspection, one must have </p>\n\n<p>$$1-xf(x)=\\frac{(x-a)(x-b)(x-c)}{(-a)(-b)(-c)}=\\left(1-\\frac{x}{a}\\right)\\left(1-\\frac{x}{b}\\right)\\left(1-\\frac{x}{c}\\right)$$</p>\n\n<p>Expanding to leading order,</p>\n\n<p>$$1-xf(x)=1-x\\left(\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}\\right)+\\text{ higher order terms}$$</p>\n\n<p>which, upon a moment's thought, gives that $f(0)=\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}$ as desired.</p>\n\n<p>Note that this method gives a natural generalisation to when we have more than just $a,b,c$.</p>\n" }, { "answer_id": 2083748, "score": 3, "text": "<p>Hint: $P(a) = b²c²(c-b)+c²a²(a-c)+a²b²(b-a)$ is a $3^{rd}$ degree polynomial in $a$, and has $a=b$ and $a=c$ as obvious roots, so it is divisible by $(a-b)(a-c)$. By symmetry in $a,b,c$ it is divisible by $(b-c)$ as well. Divide successively by $(a-b),(b-c),(c-a)$ using <a href=\"https://en.wikipedia.org/wiki/Polynomial_long_division\" rel=\"nofollow noreferrer\">polynomial long division</a>, and you get the quotient $ab+bc+ca$.</p>\n" }, { "answer_id": 2083746, "score": 1, "text": "<p>I think simpler way is to make partial fraction decomposition in $a$ of each of the terms. I.e. 
for the first term you get\n$$\n-\\frac{b}{(a-c) (b-c)}+\\frac{c}{(a-b) (b-c)}+\\frac{1}{a}\n$$\nfor the second\n$$\n-\\frac{c}{(a-b) (b-c)}-\\frac{c}{b (b-c)}\n$$\nand for the last one\n$$\n\\frac{b}{(a-c) (b-c)}+\\frac{b}{c (b-c)}\n$$\nSo most of the terms cancel and you are left with\n$$\n\\frac{1}{a}+\\frac{b}{c (b-c)}-\\frac{c}{b (b-c)}\n$$\nafter that it is simpler already\n(you can make the expansion in $b$ after that to make it even simpler before joining together the fractions)</p>\n\n<p>After partial fraction in $b$ you get\n$$\n\\frac{a+c}{a c}+\\frac{1}{b}\n$$</p>\n" }, { "answer_id": 2083998, "score": 1, "text": "<p>We have to prove that\n$$ \\sum_{cyc}\\frac{bc}{a(a-b)(a-c)}=\\sum_{cyc}\\frac{1}{a} $$\nor:\n$$ \\sum_{cyc}\\frac{bc-(a-b)(a-c)}{a(a-b)(a-c)} = \\sum_{cyc}\\frac{b+c-a}{(a-b)(a-c)}=0.$$\nSince\n$$ 0 = \\sum_{cyc}\\left(\\frac{1}{a-b}+\\frac{1}{a-c}\\right)=\\sum_{cyc}\\frac{2a-b-c}{(a-b)(a-c)} $$\nit is enough to prove that\n$$ \\frac{a}{(a-b)(a-c)}+\\frac{b}{(b-c)(b-a)}+\\frac{c}{(c-a)(c-b)}=0$$\nor, by multiplying both sides by $(a-b)(a-c)(b-c)$,\n$$ a(b-c)-b(a-c)+c(a-b) = 0 $$\nthat is trivial.</p>\n" }, { "answer_id": 2083773, "score": 6, "text": "<p><strong>Claim:</strong> </p>\n\n<p>$$\\frac{bc}{a(a-b)(a-c)}+\\frac{ca}{b(b-c)(b-a)}+\\frac{ab}{c(c-a)(c-b)}=\\frac{ab+bc+ca}{abc}$$</p>\n\n<p><strong>Proof:</strong> </p>\n\n<p>Note that the Lagrange interpolating polynomial through $(a,p),(b,q),(c,r)$ is given by:</p>\n\n<p>$$f(x)=\\frac{(x-b)(x-c)}{(a-b)(a-c)}p+\\frac{(x-a)(x-c)}{(b-a)(b-c)}q+\\frac{(x-a)(x-b)}{(c-a)(c-b)}r$$</p>\n\n<p>We seek the value of $f(0)$ when $p=\\dfrac{1}{a},q=\\dfrac{1}{b},r=\\dfrac{1}{c}$, i.e. 
when $y=f(x)$ interpolates $y=\\dfrac{1}{x}$ at $x=a,b,c$.</p>\n\n<p>We then notice that $1-xf(x)$ is a degree $3$ polynomial interpolant passing through $(0,1),(a,0),(b,0),(c,0)$, and is in fact the unique such polynomial.</p>\n\n<p>By inspection, one must have </p>\n\n<p>$$1-xf(x)=\\frac{(x-a)(x-b)(x-c)}{(-a)(-b)(-c)}=\\left(1-\\frac{x}{a}\\right)\\left(1-\\frac{x}{b}\\right)\\left(1-\\frac{x}{c}\\right)$$</p>\n\n<p>Expanding to leading order,</p>\n\n<p>$$1-xf(x)=1-x\\left(\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}\\right)+\\text{ higher order terms}$$</p>\n\n<p>which, upon a moment's thought, gives that $f(0)=\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}$ as desired.</p>\n\n<p>Note that this method gives a natural generalisation to when we have more than just $a,b,c$.</p>\n" }, { "answer_id": 2083748, "score": 3, "text": "<p>Hint: $P(a) = b²c²(c-b)+c²a²(a-c)+a²b²(b-a)$ is a $3^{rd}$ degree polynomial in $a$, and has $a=b$ and $a=c$ as obvious roots, so it is divisible by $(a-b)(a-c)$. By symmetry in $a,b,c$ it is divisible by $(b-c)$ as well. Divide successively by $(a-b),(b-c),(c-a)$ using <a href=\"https://en.wikipedia.org/wiki/Polynomial_long_division\" rel=\"nofollow noreferrer\">polynomial long division</a>, and you get the quotient $ab+bc+ca$.</p>\n" }, { "answer_id": 2083746, "score": 1, "text": "<p>I think simpler way is to make partial fraction decomposition in $a$ of each of the terms. I.e. 
for the first term you get\n$$\n-\\frac{b}{(a-c) (b-c)}+\\frac{c}{(a-b) (b-c)}+\\frac{1}{a}\n$$\nfor the second\n$$\n-\\frac{c}{(a-b) (b-c)}-\\frac{c}{b (b-c)}\n$$\nand for the last one\n$$\n\\frac{b}{(a-c) (b-c)}+\\frac{b}{c (b-c)}\n$$\nSo most of the terms cancel and you are left with\n$$\n\\frac{1}{a}+\\frac{b}{c (b-c)}-\\frac{c}{b (b-c)}\n$$\nafter that it is simpler already\n(you can make the expansion in $b$ after that to make it even simpler before joining together the fractions)</p>\n\n<p>After partial fraction in $b$ you get\n$$\n\\frac{a+c}{a c}+\\frac{1}{b}\n$$</p>\n" }, { "answer_id": 2083998, "score": 1, "text": "<p>We have to prove that\n$$ \\sum_{cyc}\\frac{bc}{a(a-b)(a-c)}=\\sum_{cyc}\\frac{1}{a} $$\nor:\n$$ \\sum_{cyc}\\frac{bc-(a-b)(a-c)}{a(a-b)(a-c)} = \\sum_{cyc}\\frac{b+c-a}{(a-b)(a-c)}=0.$$\nSince\n$$ 0 = \\sum_{cyc}\\left(\\frac{1}{a-b}+\\frac{1}{a-c}\\right)=\\sum_{cyc}\\frac{2a-b-c}{(a-b)(a-c)} $$\nit is enough to prove that\n$$ \\frac{a}{(a-b)(a-c)}+\\frac{b}{(b-c)(b-a)}+\\frac{c}{(c-a)(c-b)}=0$$\nor, by multiplying both sides by $(a-b)(a-c)(b-c)$,\n$$ a(b-c)-b(a-c)+c(a-b) = 0 $$\nthat is trivial.</p>\n" }, { "answer_id": 2083773, "score": 6, "text": "<p><strong>Claim:</strong> </p>\n\n<p>$$\\frac{bc}{a(a-b)(a-c)}+\\frac{ca}{b(b-c)(b-a)}+\\frac{ab}{c(c-a)(c-b)}=\\frac{ab+bc+ca}{abc}$$</p>\n\n<p><strong>Proof:</strong> </p>\n\n<p>Note that the Lagrange interpolating polynomial through $(a,p),(b,q),(c,r)$ is given by:</p>\n\n<p>$$f(x)=\\frac{(x-b)(x-c)}{(a-b)(a-c)}p+\\frac{(x-a)(x-c)}{(b-a)(b-c)}q+\\frac{(x-a)(x-b)}{(c-a)(c-b)}r$$</p>\n\n<p>We seek the value of $f(0)$ when $p=\\dfrac{1}{a},q=\\dfrac{1}{b},r=\\dfrac{1}{c}$, i.e. 
when $y=f(x)$ interpolates $y=\\dfrac{1}{x}$ at $x=a,b,c$.</p>\n\n<p>We then notice that $1-xf(x)$ is a degree $3$ polynomial interpolant passing through $(0,1),(a,0),(b,0),(c,0)$, and is in fact the unique such polynomial.</p>\n\n<p>By inspection, one must have </p>\n\n<p>$$1-xf(x)=\\frac{(x-a)(x-b)(x-c)}{(-a)(-b)(-c)}=\\left(1-\\frac{x}{a}\\right)\\left(1-\\frac{x}{b}\\right)\\left(1-\\frac{x}{c}\\right)$$</p>\n\n<p>Expanding to leading order,</p>\n\n<p>$$1-xf(x)=1-x\\left(\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}\\right)+\\text{ higher order terms}$$</p>\n\n<p>which, upon a moment's thought, gives that $f(0)=\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}$ as desired.</p>\n\n<p>Note that this method gives a natural generalisation to when we have more than just $a,b,c$.</p>\n" }, { "answer_id": 2083748, "score": 3, "text": "<p>Hint: $P(a) = b²c²(c-b)+c²a²(a-c)+a²b²(b-a)$ is a $3^{rd}$ degree polynomial in $a$, and has $a=b$ and $a=c$ as obvious roots, so it is divisible by $(a-b)(a-c)$. By symmetry in $a,b,c$ it is divisible by $(b-c)$ as well. Divide successively by $(a-b),(b-c),(c-a)$ using <a href=\"https://en.wikipedia.org/wiki/Polynomial_long_division\" rel=\"nofollow noreferrer\">polynomial long division</a>, and you get the quotient $ab+bc+ca$.</p>\n" }, { "answer_id": 2083746, "score": 1, "text": "<p>I think simpler way is to make partial fraction decomposition in $a$ of each of the terms. I.e. 
for the first term you get\n$$\n-\\frac{b}{(a-c) (b-c)}+\\frac{c}{(a-b) (b-c)}+\\frac{1}{a}\n$$\nfor the second\n$$\n-\\frac{c}{(a-b) (b-c)}-\\frac{c}{b (b-c)}\n$$\nand for the last one\n$$\n\\frac{b}{(a-c) (b-c)}+\\frac{b}{c (b-c)}\n$$\nSo most of the terms cancel and you are left with\n$$\n\\frac{1}{a}+\\frac{b}{c (b-c)}-\\frac{c}{b (b-c)}\n$$\nafter that it is simpler already\n(you can make the expansion in $b$ after that to make it even simpler before joining together the fractions)</p>\n\n<p>After partial fraction in $b$ you get\n$$\n\\frac{a+c}{a c}+\\frac{1}{b}\n$$</p>\n" }, { "answer_id": 2083998, "score": 1, "text": "<p>We have to prove that\n$$ \\sum_{cyc}\\frac{bc}{a(a-b)(a-c)}=\\sum_{cyc}\\frac{1}{a} $$\nor:\n$$ \\sum_{cyc}\\frac{bc-(a-b)(a-c)}{a(a-b)(a-c)} = \\sum_{cyc}\\frac{b+c-a}{(a-b)(a-c)}=0.$$\nSince\n$$ 0 = \\sum_{cyc}\\left(\\frac{1}{a-b}+\\frac{1}{a-c}\\right)=\\sum_{cyc}\\frac{2a-b-c}{(a-b)(a-c)} $$\nit is enough to prove that\n$$ \\frac{a}{(a-b)(a-c)}+\\frac{b}{(b-c)(b-a)}+\\frac{c}{(c-a)(c-b)}=0$$\nor, by multiplying both sides by $(a-b)(a-c)(b-c)$,\n$$ a(b-c)-b(a-c)+c(a-b) = 0 $$\nthat is trivial.</p>\n" }, { "answer_id": 2083773, "score": 6, "text": "<p><strong>Claim:</strong> </p>\n\n<p>$$\\frac{bc}{a(a-b)(a-c)}+\\frac{ca}{b(b-c)(b-a)}+\\frac{ab}{c(c-a)(c-b)}=\\frac{ab+bc+ca}{abc}$$</p>\n\n<p><strong>Proof:</strong> </p>\n\n<p>Note that the Lagrange interpolating polynomial through $(a,p),(b,q),(c,r)$ is given by:</p>\n\n<p>$$f(x)=\\frac{(x-b)(x-c)}{(a-b)(a-c)}p+\\frac{(x-a)(x-c)}{(b-a)(b-c)}q+\\frac{(x-a)(x-b)}{(c-a)(c-b)}r$$</p>\n\n<p>We seek the value of $f(0)$ when $p=\\dfrac{1}{a},q=\\dfrac{1}{b},r=\\dfrac{1}{c}$, i.e. 
when $y=f(x)$ interpolates $y=\\dfrac{1}{x}$ at $x=a,b,c$.</p>\n\n<p>We then notice that $1-xf(x)$ is a degree $3$ polynomial interpolant passing through $(0,1),(a,0),(b,0),(c,0)$, and is in fact the unique such polynomial.</p>\n\n<p>By inspection, one must have </p>\n\n<p>$$1-xf(x)=\\frac{(x-a)(x-b)(x-c)}{(-a)(-b)(-c)}=\\left(1-\\frac{x}{a}\\right)\\left(1-\\frac{x}{b}\\right)\\left(1-\\frac{x}{c}\\right)$$</p>\n\n<p>Expanding to leading order,</p>\n\n<p>$$1-xf(x)=1-x\\left(\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}\\right)+\\text{ higher order terms}$$</p>\n\n<p>which, upon a moment's thought, gives that $f(0)=\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}$ as desired.</p>\n\n<p>Note that this method gives a natural generalisation to when we have more than just $a,b,c$.</p>\n" }, { "answer_id": 2083748, "score": 3, "text": "<p>Hint: $P(a) = b²c²(c-b)+c²a²(a-c)+a²b²(b-a)$ is a $3^{rd}$ degree polynomial in $a$, and has $a=b$ and $a=c$ as obvious roots, so it is divisible by $(a-b)(a-c)$. By symmetry in $a,b,c$ it is divisible by $(b-c)$ as well. Divide successively by $(a-b),(b-c),(c-a)$ using <a href=\"https://en.wikipedia.org/wiki/Polynomial_long_division\" rel=\"nofollow noreferrer\">polynomial long division</a>, and you get the quotient $ab+bc+ca$.</p>\n" }, { "answer_id": 2083746, "score": 1, "text": "<p>I think simpler way is to make partial fraction decomposition in $a$ of each of the terms. I.e. 
for the first term you get\n$$\n-\\frac{b}{(a-c) (b-c)}+\\frac{c}{(a-b) (b-c)}+\\frac{1}{a}\n$$\nfor the second\n$$\n-\\frac{c}{(a-b) (b-c)}-\\frac{c}{b (b-c)}\n$$\nand for the last one\n$$\n\\frac{b}{(a-c) (b-c)}+\\frac{b}{c (b-c)}\n$$\nSo most of the terms cancel and you are left with\n$$\n\\frac{1}{a}+\\frac{b}{c (b-c)}-\\frac{c}{b (b-c)}\n$$\nafter that it is simpler already\n(you can make the expansion in $b$ after that to make it even simpler before joining together the fractions)</p>\n\n<p>After partial fraction in $b$ you get\n$$\n\\frac{a+c}{a c}+\\frac{1}{b}\n$$</p>\n" }, { "answer_id": 2083998, "score": 1, "text": "<p>We have to prove that\n$$ \\sum_{cyc}\\frac{bc}{a(a-b)(a-c)}=\\sum_{cyc}\\frac{1}{a} $$\nor:\n$$ \\sum_{cyc}\\frac{bc-(a-b)(a-c)}{a(a-b)(a-c)} = \\sum_{cyc}\\frac{b+c-a}{(a-b)(a-c)}=0.$$\nSince\n$$ 0 = \\sum_{cyc}\\left(\\frac{1}{a-b}+\\frac{1}{a-c}\\right)=\\sum_{cyc}\\frac{2a-b-c}{(a-b)(a-c)} $$\nit is enough to prove that\n$$ \\frac{a}{(a-b)(a-c)}+\\frac{b}{(b-c)(b-a)}+\\frac{c}{(c-a)(c-b)}=0$$\nor, by multiplying both sides by $(a-b)(a-c)(b-c)$,\n$$ a(b-c)-b(a-c)+c(a-b) = 0 $$\nthat is trivial.</p>\n" }, { "answer_id": 2083773, "score": 6, "text": "<p><strong>Claim:</strong> </p>\n\n<p>$$\\frac{bc}{a(a-b)(a-c)}+\\frac{ca}{b(b-c)(b-a)}+\\frac{ab}{c(c-a)(c-b)}=\\frac{ab+bc+ca}{abc}$$</p>\n\n<p><strong>Proof:</strong> </p>\n\n<p>Note that the Lagrange interpolating polynomial through $(a,p),(b,q),(c,r)$ is given by:</p>\n\n<p>$$f(x)=\\frac{(x-b)(x-c)}{(a-b)(a-c)}p+\\frac{(x-a)(x-c)}{(b-a)(b-c)}q+\\frac{(x-a)(x-b)}{(c-a)(c-b)}r$$</p>\n\n<p>We seek the value of $f(0)$ when $p=\\dfrac{1}{a},q=\\dfrac{1}{b},r=\\dfrac{1}{c}$, i.e. 
when $y=f(x)$ interpolates $y=\\dfrac{1}{x}$ at $x=a,b,c$.</p>\n\n<p>We then notice that $1-xf(x)$ is a degree $3$ polynomial interpolant passing through $(0,1),(a,0),(b,0),(c,0)$, and is in fact the unique such polynomial.</p>\n\n<p>By inspection, one must have </p>\n\n<p>$$1-xf(x)=\\frac{(x-a)(x-b)(x-c)}{(-a)(-b)(-c)}=\\left(1-\\frac{x}{a}\\right)\\left(1-\\frac{x}{b}\\right)\\left(1-\\frac{x}{c}\\right)$$</p>\n\n<p>Expanding to leading order,</p>\n\n<p>$$1-xf(x)=1-x\\left(\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}\\right)+\\text{ higher order terms}$$</p>\n\n<p>which, upon a moment's thought, gives that $f(0)=\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}$ as desired.</p>\n\n<p>Note that this method gives a natural generalisation to when we have more than just $a,b,c$.</p>\n" }, { "answer_id": 2083748, "score": 3, "text": "<p>Hint: $P(a) = b²c²(c-b)+c²a²(a-c)+a²b²(b-a)$ is a $3^{rd}$ degree polynomial in $a$, and has $a=b$ and $a=c$ as obvious roots, so it is divisible by $(a-b)(a-c)$. By symmetry in $a,b,c$ it is divisible by $(b-c)$ as well. Divide successively by $(a-b),(b-c),(c-a)$ using <a href=\"https://en.wikipedia.org/wiki/Polynomial_long_division\" rel=\"nofollow noreferrer\">polynomial long division</a>, and you get the quotient $ab+bc+ca$.</p>\n" }, { "answer_id": 2083746, "score": 1, "text": "<p>I think simpler way is to make partial fraction decomposition in $a$ of each of the terms. I.e. 
for the first term you get\n$$\n-\\frac{b}{(a-c) (b-c)}+\\frac{c}{(a-b) (b-c)}+\\frac{1}{a}\n$$\nfor the second\n$$\n-\\frac{c}{(a-b) (b-c)}-\\frac{c}{b (b-c)}\n$$\nand for the last one\n$$\n\\frac{b}{(a-c) (b-c)}+\\frac{b}{c (b-c)}\n$$\nSo most of the terms cancel and you are left with\n$$\n\\frac{1}{a}+\\frac{b}{c (b-c)}-\\frac{c}{b (b-c)}\n$$\nafter that it is simpler already\n(you can make the expansion in $b$ after that to make it even simpler before joining together the fractions)</p>\n\n<p>After partial fraction in $b$ you get\n$$\n\\frac{a+c}{a c}+\\frac{1}{b}\n$$</p>\n" }, { "answer_id": 2083998, "score": 1, "text": "<p>We have to prove that\n$$ \\sum_{cyc}\\frac{bc}{a(a-b)(a-c)}=\\sum_{cyc}\\frac{1}{a} $$\nor:\n$$ \\sum_{cyc}\\frac{bc-(a-b)(a-c)}{a(a-b)(a-c)} = \\sum_{cyc}\\frac{b+c-a}{(a-b)(a-c)}=0.$$\nSince\n$$ 0 = \\sum_{cyc}\\left(\\frac{1}{a-b}+\\frac{1}{a-c}\\right)=\\sum_{cyc}\\frac{2a-b-c}{(a-b)(a-c)} $$\nit is enough to prove that\n$$ \\frac{a}{(a-b)(a-c)}+\\frac{b}{(b-c)(b-a)}+\\frac{c}{(c-a)(c-b)}=0$$\nor, by multiplying both sides by $(a-b)(a-c)(b-c)$,\n$$ a(b-c)-b(a-c)+c(a-b) = 0 $$\nthat is trivial.</p>\n" }, { "answer_id": 2083773, "score": 6, "text": "<p><strong>Claim:</strong> </p>\n\n<p>$$\\frac{bc}{a(a-b)(a-c)}+\\frac{ca}{b(b-c)(b-a)}+\\frac{ab}{c(c-a)(c-b)}=\\frac{ab+bc+ca}{abc}$$</p>\n\n<p><strong>Proof:</strong> </p>\n\n<p>Note that the Lagrange interpolating polynomial through $(a,p),(b,q),(c,r)$ is given by:</p>\n\n<p>$$f(x)=\\frac{(x-b)(x-c)}{(a-b)(a-c)}p+\\frac{(x-a)(x-c)}{(b-a)(b-c)}q+\\frac{(x-a)(x-b)}{(c-a)(c-b)}r$$</p>\n\n<p>We seek the value of $f(0)$ when $p=\\dfrac{1}{a},q=\\dfrac{1}{b},r=\\dfrac{1}{c}$, i.e. 
when $y=f(x)$ interpolates $y=\\dfrac{1}{x}$ at $x=a,b,c$.</p>\n\n<p>We then notice that $1-xf(x)$ is a degree $3$ polynomial interpolant passing through $(0,1),(a,0),(b,0),(c,0)$, and is in fact the unique such polynomial.</p>\n\n<p>By inspection, one must have </p>\n\n<p>$$1-xf(x)=\\frac{(x-a)(x-b)(x-c)}{(-a)(-b)(-c)}=\\left(1-\\frac{x}{a}\\right)\\left(1-\\frac{x}{b}\\right)\\left(1-\\frac{x}{c}\\right)$$</p>\n\n<p>Expanding to leading order,</p>\n\n<p>$$1-xf(x)=1-x\\left(\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}\\right)+\\text{ higher order terms}$$</p>\n\n<p>which, upon a moment's thought, gives that $f(0)=\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}$ as desired.</p>\n\n<p>Note that this method gives a natural generalisation to when we have more than just $a,b,c$.</p>\n" }, { "answer_id": 2083748, "score": 3, "text": "<p>Hint: $P(a) = b²c²(c-b)+c²a²(a-c)+a²b²(b-a)$ is a $3^{rd}$ degree polynomial in $a$, and has $a=b$ and $a=c$ as obvious roots, so it is divisible by $(a-b)(a-c)$. By symmetry in $a,b,c$ it is divisible by $(b-c)$ as well. Divide successively by $(a-b),(b-c),(c-a)$ using <a href=\"https://en.wikipedia.org/wiki/Polynomial_long_division\" rel=\"nofollow noreferrer\">polynomial long division</a>, and you get the quotient $ab+bc+ca$.</p>\n" }, { "answer_id": 2083746, "score": 1, "text": "<p>I think simpler way is to make partial fraction decomposition in $a$ of each of the terms. I.e. 
for the first term you get\n$$\n-\\frac{b}{(a-c) (b-c)}+\\frac{c}{(a-b) (b-c)}+\\frac{1}{a}\n$$\nfor the second\n$$\n-\\frac{c}{(a-b) (b-c)}-\\frac{c}{b (b-c)}\n$$\nand for the last one\n$$\n\\frac{b}{(a-c) (b-c)}+\\frac{b}{c (b-c)}\n$$\nSo most of the terms cancel and you are left with\n$$\n\\frac{1}{a}+\\frac{b}{c (b-c)}-\\frac{c}{b (b-c)}\n$$\nafter that it is simpler already\n(you can make the expansion in $b$ after that to make it even simpler before joining together the fractions)</p>\n\n<p>After partial fraction in $b$ you get\n$$\n\\frac{a+c}{a c}+\\frac{1}{b}\n$$</p>\n" }, { "answer_id": 2083998, "score": 1, "text": "<p>We have to prove that\n$$ \\sum_{cyc}\\frac{bc}{a(a-b)(a-c)}=\\sum_{cyc}\\frac{1}{a} $$\nor:\n$$ \\sum_{cyc}\\frac{bc-(a-b)(a-c)}{a(a-b)(a-c)} = \\sum_{cyc}\\frac{b+c-a}{(a-b)(a-c)}=0.$$\nSince\n$$ 0 = \\sum_{cyc}\\left(\\frac{1}{a-b}+\\frac{1}{a-c}\\right)=\\sum_{cyc}\\frac{2a-b-c}{(a-b)(a-c)} $$\nit is enough to prove that\n$$ \\frac{a}{(a-b)(a-c)}+\\frac{b}{(b-c)(b-a)}+\\frac{c}{(c-a)(c-b)}=0$$\nor, by multiplying both sides by $(a-b)(a-c)(b-c)$,\n$$ a(b-c)-b(a-c)+c(a-b) = 0 $$\nthat is trivial.</p>\n" }, { "answer_id": 2083773, "score": 6, "text": "<p><strong>Claim:</strong> </p>\n\n<p>$$\\frac{bc}{a(a-b)(a-c)}+\\frac{ca}{b(b-c)(b-a)}+\\frac{ab}{c(c-a)(c-b)}=\\frac{ab+bc+ca}{abc}$$</p>\n\n<p><strong>Proof:</strong> </p>\n\n<p>Note that the Lagrange interpolating polynomial through $(a,p),(b,q),(c,r)$ is given by:</p>\n\n<p>$$f(x)=\\frac{(x-b)(x-c)}{(a-b)(a-c)}p+\\frac{(x-a)(x-c)}{(b-a)(b-c)}q+\\frac{(x-a)(x-b)}{(c-a)(c-b)}r$$</p>\n\n<p>We seek the value of $f(0)$ when $p=\\dfrac{1}{a},q=\\dfrac{1}{b},r=\\dfrac{1}{c}$, i.e. 
when $y=f(x)$ interpolates $y=\\dfrac{1}{x}$ at $x=a,b,c$.</p>\n\n<p>We then notice that $1-xf(x)$ is a degree $3$ polynomial interpolant passing through $(0,1),(a,0),(b,0),(c,0)$, and is in fact the unique such polynomial.</p>\n\n<p>By inspection, one must have </p>\n\n<p>$$1-xf(x)=\\frac{(x-a)(x-b)(x-c)}{(-a)(-b)(-c)}=\\left(1-\\frac{x}{a}\\right)\\left(1-\\frac{x}{b}\\right)\\left(1-\\frac{x}{c}\\right)$$</p>\n\n<p>Expanding to leading order,</p>\n\n<p>$$1-xf(x)=1-x\\left(\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}\\right)+\\text{ higher order terms}$$</p>\n\n<p>which, upon a moment's thought, gives that $f(0)=\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}$ as desired.</p>\n\n<p>Note that this method gives a natural generalisation to when we have more than just $a,b,c$.</p>\n" }, { "answer_id": 2083748, "score": 3, "text": "<p>Hint: $P(a) = b²c²(c-b)+c²a²(a-c)+a²b²(b-a)$ is a $3^{rd}$ degree polynomial in $a$, and has $a=b$ and $a=c$ as obvious roots, so it is divisible by $(a-b)(a-c)$. By symmetry in $a,b,c$ it is divisible by $(b-c)$ as well. Divide successively by $(a-b),(b-c),(c-a)$ using <a href=\"https://en.wikipedia.org/wiki/Polynomial_long_division\" rel=\"nofollow noreferrer\">polynomial long division</a>, and you get the quotient $ab+bc+ca$.</p>\n" }, { "answer_id": 2083746, "score": 1, "text": "<p>I think simpler way is to make partial fraction decomposition in $a$ of each of the terms. I.e. 
for the first term you get\n$$\n-\\frac{b}{(a-c) (b-c)}+\\frac{c}{(a-b) (b-c)}+\\frac{1}{a}\n$$\nfor the second\n$$\n-\\frac{c}{(a-b) (b-c)}-\\frac{c}{b (b-c)}\n$$\nand for the last one\n$$\n\\frac{b}{(a-c) (b-c)}+\\frac{b}{c (b-c)}\n$$\nSo most of the terms cancel and you are left with\n$$\n\\frac{1}{a}+\\frac{b}{c (b-c)}-\\frac{c}{b (b-c)}\n$$\nafter that it is simpler already\n(you can make the expansion in $b$ after that to make it even simpler before joining together the fractions)</p>\n\n<p>After partial fraction in $b$ you get\n$$\n\\frac{a+c}{a c}+\\frac{1}{b}\n$$</p>\n" }, { "answer_id": 2083998, "score": 1, "text": "<p>We have to prove that\n$$ \\sum_{cyc}\\frac{bc}{a(a-b)(a-c)}=\\sum_{cyc}\\frac{1}{a} $$\nor:\n$$ \\sum_{cyc}\\frac{bc-(a-b)(a-c)}{a(a-b)(a-c)} = \\sum_{cyc}\\frac{b+c-a}{(a-b)(a-c)}=0.$$\nSince\n$$ 0 = \\sum_{cyc}\\left(\\frac{1}{a-b}+\\frac{1}{a-c}\\right)=\\sum_{cyc}\\frac{2a-b-c}{(a-b)(a-c)} $$\nit is enough to prove that\n$$ \\frac{a}{(a-b)(a-c)}+\\frac{b}{(b-c)(b-a)}+\\frac{c}{(c-a)(c-b)}=0$$\nor, by multiplying both sides by $(a-b)(a-c)(b-c)$,\n$$ a(b-c)-b(a-c)+c(a-b) = 0 $$\nthat is trivial.</p>\n" }, { "answer_id": 2083773, "score": 6, "text": "<p><strong>Claim:</strong> </p>\n\n<p>$$\\frac{bc}{a(a-b)(a-c)}+\\frac{ca}{b(b-c)(b-a)}+\\frac{ab}{c(c-a)(c-b)}=\\frac{ab+bc+ca}{abc}$$</p>\n\n<p><strong>Proof:</strong> </p>\n\n<p>Note that the Lagrange interpolating polynomial through $(a,p),(b,q),(c,r)$ is given by:</p>\n\n<p>$$f(x)=\\frac{(x-b)(x-c)}{(a-b)(a-c)}p+\\frac{(x-a)(x-c)}{(b-a)(b-c)}q+\\frac{(x-a)(x-b)}{(c-a)(c-b)}r$$</p>\n\n<p>We seek the value of $f(0)$ when $p=\\dfrac{1}{a},q=\\dfrac{1}{b},r=\\dfrac{1}{c}$, i.e. 
when $y=f(x)$ interpolates $y=\\dfrac{1}{x}$ at $x=a,b,c$.</p>\n\n<p>We then notice that $1-xf(x)$ is a degree $3$ polynomial interpolant passing through $(0,1),(a,0),(b,0),(c,0)$, and is in fact the unique such polynomial.</p>\n\n<p>By inspection, one must have </p>\n\n<p>$$1-xf(x)=\\frac{(x-a)(x-b)(x-c)}{(-a)(-b)(-c)}=\\left(1-\\frac{x}{a}\\right)\\left(1-\\frac{x}{b}\\right)\\left(1-\\frac{x}{c}\\right)$$</p>\n\n<p>Expanding to leading order,</p>\n\n<p>$$1-xf(x)=1-x\\left(\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}\\right)+\\text{ higher order terms}$$</p>\n\n<p>which, upon a moment's thought, gives that $f(0)=\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}$ as desired.</p>\n\n<p>Note that this method gives a natural generalisation to when we have more than just $a,b,c$.</p>\n" }, { "answer_id": 2083748, "score": 3, "text": "<p>Hint: $P(a) = b²c²(c-b)+c²a²(a-c)+a²b²(b-a)$ is a $3^{rd}$ degree polynomial in $a$, and has $a=b$ and $a=c$ as obvious roots, so it is divisible by $(a-b)(a-c)$. By symmetry in $a,b,c$ it is divisible by $(b-c)$ as well. Divide successively by $(a-b),(b-c),(c-a)$ using <a href=\"https://en.wikipedia.org/wiki/Polynomial_long_division\" rel=\"nofollow noreferrer\">polynomial long division</a>, and you get the quotient $ab+bc+ca$.</p>\n" }, { "answer_id": 2083746, "score": 1, "text": "<p>I think simpler way is to make partial fraction decomposition in $a$ of each of the terms. I.e. 
for the first term you get\n$$\n-\\frac{b}{(a-c) (b-c)}+\\frac{c}{(a-b) (b-c)}+\\frac{1}{a}\n$$\nfor the second\n$$\n-\\frac{c}{(a-b) (b-c)}-\\frac{c}{b (b-c)}\n$$\nand for the last one\n$$\n\\frac{b}{(a-c) (b-c)}+\\frac{b}{c (b-c)}\n$$\nSo most of the terms cancel and you are left with\n$$\n\\frac{1}{a}+\\frac{b}{c (b-c)}-\\frac{c}{b (b-c)}\n$$\nafter that it is simpler already\n(you can make the expansion in $b$ after that to make it even simpler before joining together the fractions)</p>\n\n<p>After partial fraction in $b$ you get\n$$\n\\frac{a+c}{a c}+\\frac{1}{b}\n$$</p>\n" }, { "answer_id": 2083998, "score": 1, "text": "<p>We have to prove that\n$$ \\sum_{cyc}\\frac{bc}{a(a-b)(a-c)}=\\sum_{cyc}\\frac{1}{a} $$\nor:\n$$ \\sum_{cyc}\\frac{bc-(a-b)(a-c)}{a(a-b)(a-c)} = \\sum_{cyc}\\frac{b+c-a}{(a-b)(a-c)}=0.$$\nSince\n$$ 0 = \\sum_{cyc}\\left(\\frac{1}{a-b}+\\frac{1}{a-c}\\right)=\\sum_{cyc}\\frac{2a-b-c}{(a-b)(a-c)} $$\nit is enough to prove that\n$$ \\frac{a}{(a-b)(a-c)}+\\frac{b}{(b-c)(b-a)}+\\frac{c}{(c-a)(c-b)}=0$$\nor, by multiplying both sides by $(a-b)(a-c)(b-c)$,\n$$ a(b-c)-b(a-c)+c(a-b) = 0 $$\nthat is trivial.</p>\n" } ]
Question: Prove that $\frac{bc}{a(a-b)(a-c)}+\frac{ca}{b(b-c)(b-a)}+\frac{ab}{c(c-a)(c-b)}=\frac{ab+bc+ca}{abc}$ Answer 1: <p><strong>Claim:</strong> </p> <p>$$\frac{bc}{a(a-b)(a-c)}+\frac{ca}{b(b-c)(b-a)}+\frac{ab}{c(c-a)(c-b)}=\frac{ab+bc+ca}{abc}$$</p> <p><strong>Proof:</strong> </p> <p>Note that the Lagrange interpolating polynomial through $(a,p),(b,q),(c,r)$ is given by:</p> <p>$$f(x)=\frac{(x-b)(x-c)}{(a-b)(a-c)}p+\frac{(x-a)(x-c)}{(b-a)(b-c)}q+\frac{(x-a)(x-b)}{(c-a)(c-b)}r$$</p> <p>We seek the value of $f(0)$ when $p=\dfrac{1}{a},q=\dfrac{1}{b},r=\dfrac{1}{c}$, i.e. when $y=f(x)$ interpolates $y=\dfrac{1}{x}$ at $x=a,b,c$.</p> <p>We then notice that $1-xf(x)$ is a degree $3$ polynomial interpolant passing through $(0,1),(a,0),(b,0),(c,0)$, and is in fact the unique such polynomial.</p> <p>By inspection, one must have </p> <p>$$1-xf(x)=\frac{(x-a)(x-b)(x-c)}{(-a)(-b)(-c)}=\left(1-\frac{x}{a}\right)\left(1-\frac{x}{b}\right)\left(1-\frac{x}{c}\right)$$</p> <p>Expanding to leading order,</p> <p>$$1-xf(x)=1-x\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\right)+\text{ higher order terms}$$</p> <p>which, upon a moment's thought, gives that $f(0)=\frac{1}{a}+\frac{1}{b}+\frac{1}{c}$ as desired.</p> <p>Note that this method gives a natural generalisation to when we have more than just $a,b,c$.</p> Answer 2: <p>Hint: $P(a) = b²c²(c-b)+c²a²(a-c)+a²b²(b-a)$ is a $3^{rd}$ degree polynomial in $a$, and has $a=b$ and $a=c$ as obvious roots, so it is divisible by $(a-b)(a-c)$. By symmetry in $a,b,c$ it is divisible by $(b-c)$ as well. Divide successively by $(a-b),(b-c),(c-a)$ using <a href="https://en.wikipedia.org/wiki/Polynomial_long_division" rel="nofollow noreferrer">polynomial long division</a>, and you get the quotient $ab+bc+ca$.</p> Answer 3: <p>I think simpler way is to make partial fraction decomposition in $a$ of each of the terms. I.e. 
for the first term you get $$ -\frac{b}{(a-c) (b-c)}+\frac{c}{(a-b) (b-c)}+\frac{1}{a} $$ for the second $$ -\frac{c}{(a-b) (b-c)}-\frac{c}{b (b-c)} $$ and for the last one $$ \frac{b}{(a-c) (b-c)}+\frac{b}{c (b-c)} $$ So most of the terms cancel and you are left with $$ \frac{1}{a}+\frac{b}{c (b-c)}-\frac{c}{b (b-c)} $$ after that it is simpler already (you can make the expansion in $b$ after that to make it even simpler before joining together the fractions)</p> <p>After partial fraction in $b$ you get $$ \frac{a+c}{a c}+\frac{1}{b} $$</p> Answer 4: <p>We have to prove that $$ \sum_{cyc}\frac{bc}{a(a-b)(a-c)}=\sum_{cyc}\frac{1}{a} $$ or: $$ \sum_{cyc}\frac{bc-(a-b)(a-c)}{a(a-b)(a-c)} = \sum_{cyc}\frac{b+c-a}{(a-b)(a-c)}=0.$$ Since $$ 0 = \sum_{cyc}\left(\frac{1}{a-b}+\frac{1}{a-c}\right)=\sum_{cyc}\frac{2a-b-c}{(a-b)(a-c)} $$ it is enough to prove that $$ \frac{a}{(a-b)(a-c)}+\frac{b}{(b-c)(b-a)}+\frac{c}{(c-a)(c-b)}=0$$ or, by multiplying both sides by $(a-b)(a-c)(b-c)$, $$ a(b-c)-b(a-c)+c(a-b) = 0 $$ that is trivial.</p>
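The cyclic identity proved in the answers above can be spot-checked numerically. The sketch below is not part of the original thread (`lhs` and `rhs` are helper names introduced here); it uses Python's `fractions.Fraction` for exact rational arithmetic, so the comparison involves no floating-point error:

```python
from fractions import Fraction

def lhs(a, b, c):
    # cyclic sum  bc / (a(a-b)(a-c))  from the question, in exact arithmetic
    return (Fraction(b * c, a * (a - b) * (a - c))
            + Fraction(c * a, b * (b - c) * (b - a))
            + Fraction(a * b, c * (c - a) * (c - b)))

def rhs(a, b, c):
    # (ab + bc + ca) / (abc), which equals 1/a + 1/b + 1/c
    return Fraction(a * b + b * c + c * a, a * b * c)

# spot-check on a few triples of distinct non-zero integers
for triple in [(1, 2, 3), (2, 5, 7), (-1, 4, 9), (3, -2, 11)]:
    assert lhs(*triple) == rhs(*triple)
```

Exact equality on a handful of triples is of course only a sanity check, not a proof; the algebraic proofs are the ones given in the answers.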